I0906 20:13:12.445481 7 e2e.go:224] Starting e2e run "63905285-f07d-11ea-b72c-0242ac110008" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1599423191 - Will randomize all specs Will run 201 of 2164 specs Sep 6 20:13:12.623: INFO: >>> kubeConfig: /root/.kube/config Sep 6 20:13:12.625: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Sep 6 20:13:12.640: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Sep 6 20:13:12.668: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Sep 6 20:13:12.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Sep 6 20:13:12.668: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Sep 6 20:13:12.680: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Sep 6 20:13:12.680: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Sep 6 20:13:12.680: INFO: e2e test version: v1.13.12 Sep 6 20:13:12.681: INFO: kube-apiserver version: v1.13.12 SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:13:12.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets Sep 6 20:13:12.781: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:13:12.843: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 6 20:13:12.853: INFO: Number of nodes with available pods: 0 Sep 6 20:13:12.853: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
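The blue/green label flip this step performs can be reproduced by hand with kubectl; a minimal sketch, assuming the DaemonSet's node selector key is color (the node and namespace names are taken from this run, the label key is an assumption):

  # label the worker so it matches the DaemonSet's node selector (label key assumed)
  kubectl label node hunter-worker color=blue --overwrite
  # watch a daemon pod get scheduled onto the newly matching node
  kubectl -n e2e-tests-daemonsets-mk68c get pods -o wide -w
  # relabel to green so the node stops matching and the daemon pod is removed again
  kubectl label node hunter-worker color=green --overwrite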
Sep 6 20:13:12.886: INFO: Number of nodes with available pods: 0 Sep 6 20:13:12.886: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:13.889: INFO: Number of nodes with available pods: 0 Sep 6 20:13:13.889: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:14.890: INFO: Number of nodes with available pods: 0 Sep 6 20:13:14.890: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:15.890: INFO: Number of nodes with available pods: 0 Sep 6 20:13:15.890: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:16.890: INFO: Number of nodes with available pods: 0 Sep 6 20:13:16.890: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:17.889: INFO: Number of nodes with available pods: 0 Sep 6 20:13:17.889: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:18.890: INFO: Number of nodes with available pods: 0 Sep 6 20:13:18.890: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:19.890: INFO: Number of nodes with available pods: 0 Sep 6 20:13:19.890: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:20.890: INFO: Number of nodes with available pods: 1 Sep 6 20:13:20.890: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 6 20:13:20.924: INFO: Number of nodes with available pods: 1 Sep 6 20:13:20.924: INFO: Number of running nodes: 0, number of available pods: 1 Sep 6 20:13:21.928: INFO: Number of nodes with available pods: 0 Sep 6 20:13:21.928: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 6 20:13:21.944: INFO: Number of nodes with available pods: 0 Sep 6 20:13:21.944: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:22.949: INFO: Number of nodes with available pods: 0 Sep 6 20:13:22.949: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:23.948: INFO: Number of nodes with available pods: 0 Sep 6 20:13:23.948: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:24.949: INFO: Number of nodes with available pods: 0 Sep 6 20:13:24.949: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:25.950: INFO: Number of nodes with available pods: 0 Sep 6 20:13:25.950: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:26.949: INFO: Number of nodes with available pods: 0 Sep 6 20:13:26.949: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:27.948: INFO: Number of nodes with available pods: 0 Sep 6 20:13:27.948: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:28.948: INFO: Number of nodes with available pods: 0 Sep 6 20:13:28.948: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:29.949: INFO: Number of nodes with available pods: 0 Sep 6 20:13:29.949: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:30.949: INFO: Number of nodes with available pods: 0 Sep 6 20:13:30.949: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:31.976: INFO: Number of nodes with available pods: 0 Sep 6 20:13:31.976: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:32.982: INFO: Number of nodes with available pods: 0 Sep 6 20:13:32.982: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:33.948: 
INFO: Number of nodes with available pods: 1 Sep 6 20:13:33.948: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mk68c, will wait for the garbage collector to delete the pods Sep 6 20:13:34.027: INFO: Deleting DaemonSet.extensions daemon-set took: 21.115962ms Sep 6 20:13:34.127: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.28731ms Sep 6 20:13:37.831: INFO: Number of nodes with available pods: 0 Sep 6 20:13:37.831: INFO: Number of running nodes: 0, number of available pods: 0 Sep 6 20:13:37.836: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mk68c/daemonsets","resourceVersion":"208061"},"items":null} Sep 6 20:13:37.838: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mk68c/pods","resourceVersion":"208061"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:13:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mk68c" for this suite. Sep 6 20:13:43.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:13:43.917: INFO: namespace: e2e-tests-daemonsets-mk68c, resource: bindings, ignored listing per whitelist Sep 6 20:13:43.980: INFO: namespace e2e-tests-daemonsets-mk68c deletion completed in 6.102907514s • [SLOW TEST:31.298 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:13:43.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Sep 6 20:13:44.078: INFO: Waiting up to 5m0s for pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008" in namespace "e2e-tests-containers-bpfp7" to be "success or failure" Sep 6 20:13:44.098: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.274592ms Sep 6 20:13:46.149: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.071461954s Sep 6 20:13:48.167: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089610557s Sep 6 20:13:50.171: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 6.093095074s Sep 6 20:13:52.191: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11316565s STEP: Saw pod success Sep 6 20:13:52.191: INFO: Pod "client-containers-76ab082d-f07d-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:13:52.194: INFO: Trying to get logs from node hunter-worker pod client-containers-76ab082d-f07d-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:13:52.242: INFO: Waiting for pod client-containers-76ab082d-f07d-11ea-b72c-0242ac110008 to disappear Sep 6 20:13:52.255: INFO: Pod client-containers-76ab082d-f07d-11ea-b72c-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:13:52.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bpfp7" for this suite. Sep 6 20:13:58.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:13:58.333: INFO: namespace: e2e-tests-containers-bpfp7, resource: bindings, ignored listing per whitelist Sep 6 20:13:58.342: INFO: namespace e2e-tests-containers-bpfp7 deletion completed in 6.083754366s • [SLOW TEST:14.362 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:13:58.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:13:58.463: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Sep 6 20:13:58.497: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:13:58.499: INFO: Number of nodes with available pods: 0 Sep 6 20:13:58.499: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:13:59.503: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:13:59.507: INFO: Number of nodes with available pods: 0 Sep 6 20:13:59.507: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:14:00.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:00.508: INFO: Number of nodes with available pods: 0 Sep 6 20:14:00.508: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:14:01.599: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:01.602: INFO: Number of nodes with available pods: 0 Sep 6 20:14:01.602: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:14:02.503: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:02.506: INFO: Number of nodes with available pods: 1 Sep 6 20:14:02.506: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:03.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:03.507: INFO: Number of nodes with available pods: 2 Sep 6 20:14:03.507: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Sep 6 20:14:03.544: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:03.544: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:03.572: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:04.577: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:04.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:04.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:05.578: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:05.578: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Sep 6 20:14:05.583: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:06.577: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:06.577: INFO: Pod daemon-set-ggftj is not available Sep 6 20:14:06.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:06.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:07.576: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:07.576: INFO: Pod daemon-set-ggftj is not available Sep 6 20:14:07.576: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:07.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:08.577: INFO: Wrong image for pod: daemon-set-ggftj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:08.577: INFO: Pod daemon-set-ggftj is not available Sep 6 20:14:08.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:08.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:09.575: INFO: Pod daemon-set-2mn6l is not available Sep 6 20:14:09.575: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:09.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:10.577: INFO: Pod daemon-set-2mn6l is not available Sep 6 20:14:10.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:10.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:11.576: INFO: Pod daemon-set-2mn6l is not available Sep 6 20:14:11.576: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:11.582: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:12.576: INFO: Pod daemon-set-2mn6l is not available Sep 6 20:14:12.576: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Sep 6 20:14:12.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:13.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:13.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:14.576: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:14.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:15.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:15.577: INFO: Pod daemon-set-n6chw is not available Sep 6 20:14:15.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:16.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:16.577: INFO: Pod daemon-set-n6chw is not available Sep 6 20:14:16.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:17.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:17.577: INFO: Pod daemon-set-n6chw is not available Sep 6 20:14:17.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:18.575: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:18.575: INFO: Pod daemon-set-n6chw is not available Sep 6 20:14:18.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:19.577: INFO: Wrong image for pod: daemon-set-n6chw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Sep 6 20:14:19.577: INFO: Pod daemon-set-n6chw is not available Sep 6 20:14:19.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:20.577: INFO: Pod daemon-set-krnn6 is not available Sep 6 20:14:20.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
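The image bump and rollout being verified here map onto two kubectl commands; a sketch, assuming the container inside the DaemonSet is named app (the namespace and image are the ones logged above):

  # point the DaemonSet at the new image (container name is an assumption)
  kubectl -n e2e-tests-daemonsets-q9gfp set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
  # follow the RollingUpdate until every scheduled node runs a replacement pod
  kubectl -n e2e-tests-daemonsets-q9gfp rollout status daemonset/daemon-set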
Sep 6 20:14:20.582: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:20.585: INFO: Number of nodes with available pods: 1 Sep 6 20:14:20.585: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:21.589: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:21.592: INFO: Number of nodes with available pods: 1 Sep 6 20:14:21.592: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:22.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:22.680: INFO: Number of nodes with available pods: 1 Sep 6 20:14:22.680: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:23.589: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:23.591: INFO: Number of nodes with available pods: 1 Sep 6 20:14:23.591: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:24.590: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:24.593: INFO: Number of nodes with available pods: 1 Sep 6 20:14:24.593: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:25.588: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:25.590: INFO: Number of nodes with available pods: 1 Sep 6 20:14:25.590: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:26.588: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:26.590: INFO: Number of nodes with available pods: 1 Sep 6 20:14:26.590: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:27.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:27.692: INFO: Number of nodes with available pods: 1 Sep 6 20:14:27.692: INFO: Node hunter-worker2 is running more than one daemon pod Sep 6 20:14:28.589: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:14:28.592: INFO: Number of nodes with available pods: 2 Sep 6 20:14:28.592: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-q9gfp, will wait for the garbage collector to delete the pods Sep 6 20:14:28.664: INFO: Deleting DaemonSet.extensions daemon-set took: 5.36871ms Sep 6 20:14:28.865: INFO: Terminating DaemonSet.extensions daemon-set pods took: 
200.231121ms Sep 6 20:14:40.168: INFO: Number of nodes with available pods: 0 Sep 6 20:14:40.168: INFO: Number of running nodes: 0, number of available pods: 0 Sep 6 20:14:40.171: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-q9gfp/daemonsets","resourceVersion":"208457"},"items":null} Sep 6 20:14:40.173: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-q9gfp/pods","resourceVersion":"208457"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:14:40.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-q9gfp" for this suite. Sep 6 20:14:46.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:14:46.279: INFO: namespace: e2e-tests-daemonsets-q9gfp, resource: bindings, ignored listing per whitelist Sep 6 20:14:46.287: INFO: namespace e2e-tests-daemonsets-q9gfp deletion completed in 6.08332477s • [SLOW TEST:47.945 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:14:46.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 6 20:14:52.424: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-9bcd2a0c-f07d-11ea-b72c-0242ac110008,GenerateName:,Namespace:e2e-tests-events-2wr8r,SelfLink:/api/v1/namespaces/e2e-tests-events-2wr8r/pods/send-events-9bcd2a0c-f07d-11ea-b72c-0242ac110008,UID:9bd1c31e-f07d-11ea-b060-0242ac120006,ResourceVersion:208560,Generation:0,CreationTimestamp:2020-09-06 20:14:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 371388665,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f4w8x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f4w8x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-f4w8x true /var/run/secrets/kubernetes.io/serviceaccount }] 
[] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001794d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001794d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:14:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:14:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:14:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:14:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.12,StartTime:2020-09-06 20:14:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-09-06 20:14:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://14e70aeee3d9436cc9c0aff3a024d5bc69eb3e43634f9d8b8625261cd3aa70da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Sep 6 20:14:54.429: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 6 20:14:56.434: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:14:56.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-2wr8r" for this suite. 
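The scheduler and kubelet events the test asserts on can also be listed directly; a sketch using the pod name from this run:

  # events that reference the test pod (the scheduler's Scheduled, the kubelet's Pulled/Created/Started)
  kubectl -n e2e-tests-events-2wr8r get events --field-selector involvedObject.name=send-events-9bcd2a0c-f07d-11ea-b72c-0242ac110008
  # the same events appear at the bottom of the pod description
  kubectl -n e2e-tests-events-2wr8r describe pod send-events-9bcd2a0c-f07d-11ea-b72c-0242ac110008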
Sep 6 20:15:42.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:15:42.510: INFO: namespace: e2e-tests-events-2wr8r, resource: bindings, ignored listing per whitelist Sep 6 20:15:42.618: INFO: namespace e2e-tests-events-2wr8r deletion completed in 46.153848834s • [SLOW TEST:56.330 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:15:42.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 6 20:15:42.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:15:45.519: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 6 20:15:45.519: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Sep 6 20:15:45.560: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Sep 6 20:15:45.564: INFO: scanned /root for discovery docs: Sep 6 20:15:45.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:16:01.495: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Sep 6 20:16:01.495: INFO: stdout: "Created e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a\nScaling up e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Sep 6 20:16:01.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:16:01.612: INFO: stderr: "" Sep 6 20:16:01.612: INFO: stdout: "e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a-w72xb " Sep 6 20:16:01.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a-w72xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:16:01.737: INFO: stderr: "" Sep 6 20:16:01.737: INFO: stdout: "true" Sep 6 20:16:01.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a-w72xb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:16:01.836: INFO: stderr: "" Sep 6 20:16:01.836: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Sep 6 20:16:01.836: INFO: e2e-test-nginx-rc-0fdc36268c5d4a773ee99362607e2b3a-w72xb is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Sep 6 20:16:01.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zfzfd' Sep 6 20:16:01.956: INFO: stderr: "" Sep 6 20:16:01.956: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:16:01.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zfzfd" for this suite. 
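Both deprecation warnings in this output point at the replacement workflow: manage the pods with a Deployment and drive updates through kubectl rollout rather than rolling-update on a ReplicationController. A sketch of that newer flow (the Deployment name is illustrative, and the container name follows kubectl's default of naming it after the image):

  # create a Deployment instead of a --generator=run/v1 ReplicationController
  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  # roll out a new (here identical) image and wait for it, the rollout counterpart of rolling-update
  kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/e2e-test-nginx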
Sep 6 20:16:23.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:16:24.061: INFO: namespace: e2e-tests-kubectl-zfzfd, resource: bindings, ignored listing per whitelist Sep 6 20:16:24.070: INFO: namespace e2e-tests-kubectl-zfzfd deletion completed in 22.097382571s • [SLOW TEST:41.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:16:24.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 6 20:16:40.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:40.276: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:42.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:42.280: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:44.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:44.279: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:46.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:46.291: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:48.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:48.280: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:50.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:51.117: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:52.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:52.279: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:54.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:54.280: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:56.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:56.279: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:16:58.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:16:58.280: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:17:00.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 
20:17:00.439: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:17:02.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:17:02.537: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:17:04.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:17:04.841: INFO: Pod pod-with-prestop-exec-hook still exists Sep 6 20:17:06.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 6 20:17:06.281: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:17:06.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8jg6w" for this suite. Sep 6 20:17:18.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:17:18.314: INFO: namespace: e2e-tests-container-lifecycle-hook-8jg6w, resource: bindings, ignored listing per whitelist Sep 6 20:17:18.372: INFO: namespace e2e-tests-container-lifecycle-hook-8jg6w deletion completed in 12.079921394s • [SLOW TEST:54.302 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:17:18.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
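The behaviour this test targets, a failed daemon pod being replaced by the controller, can be approximated by hand by removing one daemon pod and watching it come back; a sketch using the namespace from this run (the pod name is a placeholder):

  # delete one of the daemon pods; the DaemonSet controller should recreate it
  kubectl -n e2e-tests-daemonsets-qgmbr delete pod <daemon-set-pod-name>
  # watch the replacement pod appear on the same node
  kubectl -n e2e-tests-daemonsets-qgmbr get pods -o wide -w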
Sep 6 20:17:18.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:18.519: INFO: Number of nodes with available pods: 0 Sep 6 20:17:18.519: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:17:19.522: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:19.524: INFO: Number of nodes with available pods: 0 Sep 6 20:17:19.524: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:17:21.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:21.967: INFO: Number of nodes with available pods: 0 Sep 6 20:17:21.967: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:17:22.524: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:22.527: INFO: Number of nodes with available pods: 0 Sep 6 20:17:22.527: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:17:23.522: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:23.524: INFO: Number of nodes with available pods: 0 Sep 6 20:17:23.524: INFO: Node hunter-worker is running more than one daemon pod Sep 6 20:17:24.523: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:24.525: INFO: Number of nodes with available pods: 2 Sep 6 20:17:24.525: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Sep 6 20:17:24.541: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 6 20:17:24.556: INFO: Number of nodes with available pods: 2 Sep 6 20:17:24.556: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qgmbr, will wait for the garbage collector to delete the pods Sep 6 20:17:25.743: INFO: Deleting DaemonSet.extensions daemon-set took: 62.726595ms Sep 6 20:17:25.844: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.262154ms Sep 6 20:17:40.147: INFO: Number of nodes with available pods: 0 Sep 6 20:17:40.147: INFO: Number of running nodes: 0, number of available pods: 0 Sep 6 20:17:40.150: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qgmbr/daemonsets","resourceVersion":"209289"},"items":null} Sep 6 20:17:40.152: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qgmbr/pods","resourceVersion":"209289"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:17:40.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-qgmbr" for this suite. Sep 6 20:17:46.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:17:46.240: INFO: namespace: e2e-tests-daemonsets-qgmbr, resource: bindings, ignored listing per whitelist Sep 6 20:17:46.273: INFO: namespace e2e-tests-daemonsets-qgmbr deletion completed in 6.111581636s • [SLOW TEST:27.901 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:17:46.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-0714ecba-f07e-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume configMaps Sep 6 20:17:46.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-22nlt" to be "success or failure" Sep 6 20:17:46.380: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.526293ms Sep 6 20:17:48.383: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009003511s Sep 6 20:17:50.391: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017096164s Sep 6 20:17:52.875: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50140725s Sep 6 20:17:54.879: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504708655s Sep 6 20:17:56.883: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.508771389s Sep 6 20:17:58.885: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.510955094s Sep 6 20:18:01.034: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.659502961s Sep 6 20:18:03.898: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.523470455s Sep 6 20:18:06.249: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.875282413s Sep 6 20:18:08.252: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.878381959s STEP: Saw pod success Sep 6 20:18:08.252: INFO: Pod "pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:18:08.255: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008 container configmap-volume-test: STEP: delete the pod Sep 6 20:18:08.318: INFO: Waiting for pod pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:18:08.347: INFO: Pod pod-configmaps-07166e19-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:18:08.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-22nlt" for this suite. 
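What this test builds is a ConfigMap mounted as a volume with an explicit defaultMode on the projected files; a minimal sketch of the same shape (all names, the busybox image and the 0400 mode are illustrative, only the defaultMode field itself is the point). First the ConfigMap, then a pod applied with kubectl apply -f:

  kubectl create configmap demo-config --from-literal=data-1=value-1

  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      # list the projected file (following the volume's symlink) with its mode, then print it
      command: ["sh", "-c", "ls -lL /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-config
        defaultMode: 0400   # octal; the file lands as -r--------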
Sep 6 20:18:14.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:18:14.419: INFO: namespace: e2e-tests-configmap-22nlt, resource: bindings, ignored listing per whitelist Sep 6 20:18:14.450: INFO: namespace e2e-tests-configmap-22nlt deletion completed in 6.100082271s • [SLOW TEST:28.177 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:18:14.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 6 20:18:14.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-xgt8p" to be "success or failure" Sep 6 20:18:14.614: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.520859ms Sep 6 20:18:16.618: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024198276s Sep 6 20:18:18.621: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027580215s Sep 6 20:18:20.630: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037032432s Sep 6 20:18:22.634: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040228248s Sep 6 20:18:24.636: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042885686s Sep 6 20:18:26.795: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 12.201419698s Sep 6 20:18:29.015: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.421854633s STEP: Saw pod success Sep 6 20:18:29.015: INFO: Pod "downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:18:29.018: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008 container client-container: STEP: delete the pod Sep 6 20:18:29.621: INFO: Waiting for pod downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:18:29.691: INFO: Pod downwardapi-volume-17e87456-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:18:29.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xgt8p" for this suite. Sep 6 20:18:37.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:18:38.031: INFO: namespace: e2e-tests-downward-api-xgt8p, resource: bindings, ignored listing per whitelist Sep 6 20:18:38.035: INFO: namespace e2e-tests-downward-api-xgt8p deletion completed in 8.162484293s • [SLOW TEST:23.585 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:18:38.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-260220f3-f07e-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume configMaps Sep 6 20:18:38.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-65dhr" to be "success or failure" Sep 6 20:18:38.363: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 46.098536ms Sep 6 20:18:41.272: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.95476555s Sep 6 20:18:43.282: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.964846501s Sep 6 20:18:45.285: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 6.968194101s Sep 6 20:18:47.295: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.97801717s STEP: Saw pod success Sep 6 20:18:47.295: INFO: Pod "pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:18:47.297: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008 container configmap-volume-test: STEP: delete the pod Sep 6 20:18:47.365: INFO: Waiting for pod pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:18:47.390: INFO: Pod pod-configmaps-2604abb8-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:18:47.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-65dhr" for this suite. Sep 6 20:18:53.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:18:53.464: INFO: namespace: e2e-tests-configmap-65dhr, resource: bindings, ignored listing per whitelist Sep 6 20:18:53.662: INFO: namespace e2e-tests-configmap-65dhr deletion completed in 6.26872638s • [SLOW TEST:15.626 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:18:53.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Sep 6 20:18:54.212: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:18:54.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xhc79" for this suite. 
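For readers reproducing the proxy check above outside the e2e framework, the idea is: start `kubectl proxy -p 0` (port 0 lets the proxy pick a free port), read the chosen port from the proxy's startup output, and GET /api/ through it. A minimal Go sketch follows; it assumes kubectl is on PATH and that the proxy prints its listen address ("127.0.0.1:<port>") on stdout, which is how current kubectl behaves but is an assumption, not something the log above guarantees.

```go
// Sketch only: approximates the "proxy with --port 0" check.
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// Port 0 asks the proxy to pick a free port; parse it from the banner line
	// (assumed format: "Starting to serve on 127.0.0.1:<port>").
	re := regexp.MustCompile(`127\.0\.0\.1:(\d+)`)
	scanner := bufio.NewScanner(stdout)
	var port string
	for scanner.Scan() {
		if m := re.FindStringSubmatch(scanner.Text()); m != nil {
			port = m[1]
			break
		}
	}
	if port == "" {
		panic("could not determine proxy port")
	}

	// Same check the test performs: GET /api/ through the proxy.
	resp, err := http.Get("http://127.0.0.1:" + port + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```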
Sep 6 20:19:00.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:19:00.594: INFO: namespace: e2e-tests-kubectl-xhc79, resource: bindings, ignored listing per whitelist Sep 6 20:19:00.597: INFO: namespace e2e-tests-kubectl-xhc79 deletion completed in 6.301982159s • [SLOW TEST:6.935 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:19:00.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3366365b-f07e-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume secrets Sep 6 20:19:00.723: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-9zq6z" to be "success or failure" Sep 6 20:19:00.728: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289136ms Sep 6 20:19:02.731: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007130825s Sep 6 20:19:04.734: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010431686s Sep 6 20:19:06.738: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014243836s Sep 6 20:19:08.978: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.254784597s STEP: Saw pod success Sep 6 20:19:08.978: INFO: Pod "pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:19:08.981: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Sep 6 20:19:09.094: INFO: Waiting for pod pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:19:09.097: INFO: Pod pod-projected-secrets-33671093-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:19:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9zq6z" for this suite. Sep 6 20:19:15.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:19:15.121: INFO: namespace: e2e-tests-projected-9zq6z, resource: bindings, ignored listing per whitelist Sep 6 20:19:15.168: INFO: namespace e2e-tests-projected-9zq6z deletion completed in 6.068408416s • [SLOW TEST:14.571 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:19:15.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-3c16a0b1-f07e-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume secrets Sep 6 20:19:15.304: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-9wgjj" to be "success or failure" Sep 6 20:19:15.309: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389783ms Sep 6 20:19:17.343: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0380321s Sep 6 20:19:19.870: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565567309s Sep 6 20:19:21.874: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569252507s Sep 6 20:19:23.877: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.572181472s Sep 6 20:19:25.880: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.575164639s Sep 6 20:19:27.883: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.578379315s STEP: Saw pod success Sep 6 20:19:27.883: INFO: Pod "pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:19:27.885: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Sep 6 20:19:28.418: INFO: Waiting for pod pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:19:29.449: INFO: Pod pod-projected-secrets-3c180919-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:19:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9wgjj" for this suite. Sep 6 20:19:35.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:19:35.763: INFO: namespace: e2e-tests-projected-9wgjj, resource: bindings, ignored listing per whitelist Sep 6 20:19:35.795: INFO: namespace e2e-tests-projected-9wgjj deletion completed in 6.342197504s • [SLOW TEST:20.627 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:19:35.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 6 20:19:54.055: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.055: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.109439 7 log.go:172] (0xc0013962c0) (0xc0013f9ea0) Create stream I0906 20:19:54.109471 7 log.go:172] (0xc0013962c0) (0xc0013f9ea0) Stream added, broadcasting: 1 I0906 20:19:54.111115 7 log.go:172] (0xc0013962c0) Reply frame received for 1 I0906 20:19:54.111166 7 log.go:172] (0xc0013962c0) (0xc0008b0d20) Create stream I0906 20:19:54.111183 7 log.go:172] 
(0xc0013962c0) (0xc0008b0d20) Stream added, broadcasting: 3 I0906 20:19:54.112246 7 log.go:172] (0xc0013962c0) Reply frame received for 3 I0906 20:19:54.112320 7 log.go:172] (0xc0013962c0) (0xc0008b0e60) Create stream I0906 20:19:54.112349 7 log.go:172] (0xc0013962c0) (0xc0008b0e60) Stream added, broadcasting: 5 I0906 20:19:54.113244 7 log.go:172] (0xc0013962c0) Reply frame received for 5 I0906 20:19:54.200343 7 log.go:172] (0xc0013962c0) Data frame received for 5 I0906 20:19:54.200387 7 log.go:172] (0xc0008b0e60) (5) Data frame handling I0906 20:19:54.200427 7 log.go:172] (0xc0013962c0) Data frame received for 3 I0906 20:19:54.200448 7 log.go:172] (0xc0008b0d20) (3) Data frame handling I0906 20:19:54.200454 7 log.go:172] (0xc0008b0d20) (3) Data frame sent I0906 20:19:54.200470 7 log.go:172] (0xc0013962c0) Data frame received for 3 I0906 20:19:54.200476 7 log.go:172] (0xc0008b0d20) (3) Data frame handling I0906 20:19:54.202082 7 log.go:172] (0xc0013962c0) Data frame received for 1 I0906 20:19:54.202177 7 log.go:172] (0xc0013f9ea0) (1) Data frame handling I0906 20:19:54.202195 7 log.go:172] (0xc0013f9ea0) (1) Data frame sent I0906 20:19:54.202209 7 log.go:172] (0xc0013962c0) (0xc0013f9ea0) Stream removed, broadcasting: 1 I0906 20:19:54.202284 7 log.go:172] (0xc0013962c0) (0xc0013f9ea0) Stream removed, broadcasting: 1 I0906 20:19:54.202296 7 log.go:172] (0xc0013962c0) (0xc0008b0d20) Stream removed, broadcasting: 3 I0906 20:19:54.202305 7 log.go:172] (0xc0013962c0) (0xc0008b0e60) Stream removed, broadcasting: 5 Sep 6 20:19:54.202: INFO: Exec stderr: "" Sep 6 20:19:54.202: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.202: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.204134 7 log.go:172] (0xc0013962c0) Go away received I0906 20:19:54.234556 7 log.go:172] (0xc0008fda20) (0xc0008b0fa0) Create stream I0906 20:19:54.234614 7 log.go:172] (0xc0008fda20) (0xc0008b0fa0) Stream added, broadcasting: 1 I0906 20:19:54.237122 7 log.go:172] (0xc0008fda20) Reply frame received for 1 I0906 20:19:54.237154 7 log.go:172] (0xc0008fda20) (0xc0012b2820) Create stream I0906 20:19:54.237171 7 log.go:172] (0xc0008fda20) (0xc0012b2820) Stream added, broadcasting: 3 I0906 20:19:54.239033 7 log.go:172] (0xc0008fda20) Reply frame received for 3 I0906 20:19:54.239081 7 log.go:172] (0xc0008fda20) (0xc00125c280) Create stream I0906 20:19:54.239108 7 log.go:172] (0xc0008fda20) (0xc00125c280) Stream added, broadcasting: 5 I0906 20:19:54.243663 7 log.go:172] (0xc0008fda20) Reply frame received for 5 I0906 20:19:54.395900 7 log.go:172] (0xc0008fda20) Data frame received for 5 I0906 20:19:54.395951 7 log.go:172] (0xc00125c280) (5) Data frame handling I0906 20:19:54.395987 7 log.go:172] (0xc0008fda20) Data frame received for 3 I0906 20:19:54.396021 7 log.go:172] (0xc0012b2820) (3) Data frame handling I0906 20:19:54.396054 7 log.go:172] (0xc0012b2820) (3) Data frame sent I0906 20:19:54.396079 7 log.go:172] (0xc0008fda20) Data frame received for 3 I0906 20:19:54.396104 7 log.go:172] (0xc0012b2820) (3) Data frame handling I0906 20:19:54.401386 7 log.go:172] (0xc0008fda20) Data frame received for 1 I0906 20:19:54.401413 7 log.go:172] (0xc0008b0fa0) (1) Data frame handling I0906 20:19:54.401429 7 log.go:172] (0xc0008b0fa0) (1) Data frame sent I0906 20:19:54.402054 7 log.go:172] (0xc0008fda20) (0xc0008b0fa0) Stream removed, broadcasting: 1 
I0906 20:19:54.402162 7 log.go:172] (0xc0008fda20) (0xc0008b0fa0) Stream removed, broadcasting: 1 I0906 20:19:54.402193 7 log.go:172] (0xc0008fda20) (0xc0012b2820) Stream removed, broadcasting: 3 I0906 20:19:54.402342 7 log.go:172] (0xc0008fda20) (0xc00125c280) Stream removed, broadcasting: 5 Sep 6 20:19:54.402: INFO: Exec stderr: "" Sep 6 20:19:54.402: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.402: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.439154 7 log.go:172] (0xc0008fdef0) (0xc0008b1220) Create stream I0906 20:19:54.439183 7 log.go:172] (0xc0008fdef0) (0xc0008b1220) Stream added, broadcasting: 1 I0906 20:19:54.440808 7 log.go:172] (0xc0008fdef0) Reply frame received for 1 I0906 20:19:54.440832 7 log.go:172] (0xc0008fdef0) (0xc00125c320) Create stream I0906 20:19:54.440840 7 log.go:172] (0xc0008fdef0) (0xc00125c320) Stream added, broadcasting: 3 I0906 20:19:54.441559 7 log.go:172] (0xc0008fdef0) Reply frame received for 3 I0906 20:19:54.441592 7 log.go:172] (0xc0008fdef0) (0xc00125c3c0) Create stream I0906 20:19:54.441603 7 log.go:172] (0xc0008fdef0) (0xc00125c3c0) Stream added, broadcasting: 5 I0906 20:19:54.442323 7 log.go:172] (0xc0008fdef0) Reply frame received for 5 I0906 20:19:54.515364 7 log.go:172] (0xc0008fdef0) Data frame received for 5 I0906 20:19:54.515400 7 log.go:172] (0xc00125c3c0) (5) Data frame handling I0906 20:19:54.515435 7 log.go:172] (0xc0008fdef0) Data frame received for 3 I0906 20:19:54.515457 7 log.go:172] (0xc00125c320) (3) Data frame handling I0906 20:19:54.515474 7 log.go:172] (0xc00125c320) (3) Data frame sent I0906 20:19:54.515486 7 log.go:172] (0xc0008fdef0) Data frame received for 3 I0906 20:19:54.515510 7 log.go:172] (0xc00125c320) (3) Data frame handling I0906 20:19:54.516142 7 log.go:172] (0xc0008fdef0) Data frame received for 1 I0906 20:19:54.516172 7 log.go:172] (0xc0008b1220) (1) Data frame handling I0906 20:19:54.516201 7 log.go:172] (0xc0008b1220) (1) Data frame sent I0906 20:19:54.516271 7 log.go:172] (0xc0008fdef0) (0xc0008b1220) Stream removed, broadcasting: 1 I0906 20:19:54.516289 7 log.go:172] (0xc0008fdef0) Go away received I0906 20:19:54.516412 7 log.go:172] (0xc0008fdef0) (0xc0008b1220) Stream removed, broadcasting: 1 I0906 20:19:54.516442 7 log.go:172] (0xc0008fdef0) (0xc00125c320) Stream removed, broadcasting: 3 I0906 20:19:54.516464 7 log.go:172] (0xc0008fdef0) (0xc00125c3c0) Stream removed, broadcasting: 5 Sep 6 20:19:54.516: INFO: Exec stderr: "" Sep 6 20:19:54.516: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.516: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.544376 7 log.go:172] (0xc0013968f0) (0xc00125c640) Create stream I0906 20:19:54.544398 7 log.go:172] (0xc0013968f0) (0xc00125c640) Stream added, broadcasting: 1 I0906 20:19:54.546426 7 log.go:172] (0xc0013968f0) Reply frame received for 1 I0906 20:19:54.546452 7 log.go:172] (0xc0013968f0) (0xc001f6b7c0) Create stream I0906 20:19:54.546465 7 log.go:172] (0xc0013968f0) (0xc001f6b7c0) Stream added, broadcasting: 3 I0906 20:19:54.547275 7 log.go:172] (0xc0013968f0) Reply frame received for 3 I0906 20:19:54.547326 7 log.go:172] (0xc0013968f0) (0xc00181f040) Create stream I0906 20:19:54.547345 7 
log.go:172] (0xc0013968f0) (0xc00181f040) Stream added, broadcasting: 5 I0906 20:19:54.548123 7 log.go:172] (0xc0013968f0) Reply frame received for 5 I0906 20:19:54.604721 7 log.go:172] (0xc0013968f0) Data frame received for 5 I0906 20:19:54.604759 7 log.go:172] (0xc00181f040) (5) Data frame handling I0906 20:19:54.604781 7 log.go:172] (0xc0013968f0) Data frame received for 3 I0906 20:19:54.604792 7 log.go:172] (0xc001f6b7c0) (3) Data frame handling I0906 20:19:54.604805 7 log.go:172] (0xc001f6b7c0) (3) Data frame sent I0906 20:19:54.604819 7 log.go:172] (0xc0013968f0) Data frame received for 3 I0906 20:19:54.604831 7 log.go:172] (0xc001f6b7c0) (3) Data frame handling I0906 20:19:54.605997 7 log.go:172] (0xc0013968f0) Data frame received for 1 I0906 20:19:54.606034 7 log.go:172] (0xc00125c640) (1) Data frame handling I0906 20:19:54.606074 7 log.go:172] (0xc00125c640) (1) Data frame sent I0906 20:19:54.606099 7 log.go:172] (0xc0013968f0) (0xc00125c640) Stream removed, broadcasting: 1 I0906 20:19:54.606164 7 log.go:172] (0xc0013968f0) Go away received I0906 20:19:54.606200 7 log.go:172] (0xc0013968f0) (0xc00125c640) Stream removed, broadcasting: 1 I0906 20:19:54.606239 7 log.go:172] (0xc0013968f0) (0xc001f6b7c0) Stream removed, broadcasting: 3 I0906 20:19:54.606258 7 log.go:172] (0xc0013968f0) (0xc00181f040) Stream removed, broadcasting: 5 Sep 6 20:19:54.606: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 6 20:19:54.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.606: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.633927 7 log.go:172] (0xc00045fad0) (0xc0012b2aa0) Create stream I0906 20:19:54.633958 7 log.go:172] (0xc00045fad0) (0xc0012b2aa0) Stream added, broadcasting: 1 I0906 20:19:54.642117 7 log.go:172] (0xc00045fad0) Reply frame received for 1 I0906 20:19:54.642164 7 log.go:172] (0xc00045fad0) (0xc001e58000) Create stream I0906 20:19:54.642182 7 log.go:172] (0xc00045fad0) (0xc001e58000) Stream added, broadcasting: 3 I0906 20:19:54.643085 7 log.go:172] (0xc00045fad0) Reply frame received for 3 I0906 20:19:54.643107 7 log.go:172] (0xc00045fad0) (0xc001a68000) Create stream I0906 20:19:54.643116 7 log.go:172] (0xc00045fad0) (0xc001a68000) Stream added, broadcasting: 5 I0906 20:19:54.643726 7 log.go:172] (0xc00045fad0) Reply frame received for 5 I0906 20:19:54.685211 7 log.go:172] (0xc00045fad0) Data frame received for 5 I0906 20:19:54.685260 7 log.go:172] (0xc001a68000) (5) Data frame handling I0906 20:19:54.685291 7 log.go:172] (0xc00045fad0) Data frame received for 3 I0906 20:19:54.685305 7 log.go:172] (0xc001e58000) (3) Data frame handling I0906 20:19:54.685325 7 log.go:172] (0xc001e58000) (3) Data frame sent I0906 20:19:54.685340 7 log.go:172] (0xc00045fad0) Data frame received for 3 I0906 20:19:54.685351 7 log.go:172] (0xc001e58000) (3) Data frame handling I0906 20:19:54.686244 7 log.go:172] (0xc00045fad0) Data frame received for 1 I0906 20:19:54.686271 7 log.go:172] (0xc0012b2aa0) (1) Data frame handling I0906 20:19:54.686284 7 log.go:172] (0xc0012b2aa0) (1) Data frame sent I0906 20:19:54.686296 7 log.go:172] (0xc00045fad0) (0xc0012b2aa0) Stream removed, broadcasting: 1 I0906 20:19:54.686331 7 log.go:172] (0xc00045fad0) Go away received I0906 20:19:54.686361 7 log.go:172] (0xc00045fad0) (0xc0012b2aa0) Stream 
removed, broadcasting: 1 I0906 20:19:54.686380 7 log.go:172] (0xc00045fad0) (0xc001e58000) Stream removed, broadcasting: 3 I0906 20:19:54.686424 7 log.go:172] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0xc001a68000)} I0906 20:19:54.686453 7 log.go:172] (0xc00045fad0) (0xc001a68000) Stream removed, broadcasting: 5 Sep 6 20:19:54.686: INFO: Exec stderr: "" Sep 6 20:19:54.686: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.686: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.713498 7 log.go:172] (0xc0008fda20) (0xc000612320) Create stream I0906 20:19:54.713519 7 log.go:172] (0xc0008fda20) (0xc000612320) Stream added, broadcasting: 1 I0906 20:19:54.716779 7 log.go:172] (0xc0008fda20) Reply frame received for 1 I0906 20:19:54.716820 7 log.go:172] (0xc0008fda20) (0xc00089c0a0) Create stream I0906 20:19:54.716838 7 log.go:172] (0xc0008fda20) (0xc00089c0a0) Stream added, broadcasting: 3 I0906 20:19:54.719213 7 log.go:172] (0xc0008fda20) Reply frame received for 3 I0906 20:19:54.719259 7 log.go:172] (0xc0008fda20) (0xc000c44000) Create stream I0906 20:19:54.719281 7 log.go:172] (0xc0008fda20) (0xc000c44000) Stream added, broadcasting: 5 I0906 20:19:54.720301 7 log.go:172] (0xc0008fda20) Reply frame received for 5 I0906 20:19:54.786113 7 log.go:172] (0xc0008fda20) Data frame received for 5 I0906 20:19:54.786163 7 log.go:172] (0xc000c44000) (5) Data frame handling I0906 20:19:54.786205 7 log.go:172] (0xc0008fda20) Data frame received for 3 I0906 20:19:54.786221 7 log.go:172] (0xc00089c0a0) (3) Data frame handling I0906 20:19:54.786239 7 log.go:172] (0xc00089c0a0) (3) Data frame sent I0906 20:19:54.786256 7 log.go:172] (0xc0008fda20) Data frame received for 3 I0906 20:19:54.786272 7 log.go:172] (0xc00089c0a0) (3) Data frame handling I0906 20:19:54.787513 7 log.go:172] (0xc0008fda20) Data frame received for 1 I0906 20:19:54.787535 7 log.go:172] (0xc000612320) (1) Data frame handling I0906 20:19:54.787547 7 log.go:172] (0xc000612320) (1) Data frame sent I0906 20:19:54.787569 7 log.go:172] (0xc0008fda20) (0xc000612320) Stream removed, broadcasting: 1 I0906 20:19:54.787601 7 log.go:172] (0xc0008fda20) Go away received I0906 20:19:54.787693 7 log.go:172] (0xc0008fda20) (0xc000612320) Stream removed, broadcasting: 1 I0906 20:19:54.787728 7 log.go:172] (0xc0008fda20) (0xc00089c0a0) Stream removed, broadcasting: 3 I0906 20:19:54.787750 7 log.go:172] (0xc0008fda20) (0xc000c44000) Stream removed, broadcasting: 5 Sep 6 20:19:54.787: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 6 20:19:54.787: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.787: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.834880 7 log.go:172] (0xc00045f760) (0xc00089c820) Create stream I0906 20:19:54.834911 7 log.go:172] (0xc00045f760) (0xc00089c820) Stream added, broadcasting: 1 I0906 20:19:54.836202 7 log.go:172] (0xc00045f760) Reply frame received for 1 I0906 20:19:54.836241 7 log.go:172] (0xc00045f760) (0xc00089c960) Create stream I0906 20:19:54.836265 7 log.go:172] (0xc00045f760) (0xc00089c960) Stream added, broadcasting: 3 I0906 
20:19:54.837184 7 log.go:172] (0xc00045f760) Reply frame received for 3 I0906 20:19:54.837222 7 log.go:172] (0xc00045f760) (0xc00089caa0) Create stream I0906 20:19:54.837236 7 log.go:172] (0xc00045f760) (0xc00089caa0) Stream added, broadcasting: 5 I0906 20:19:54.837924 7 log.go:172] (0xc00045f760) Reply frame received for 5 I0906 20:19:54.892299 7 log.go:172] (0xc00045f760) Data frame received for 5 I0906 20:19:54.892343 7 log.go:172] (0xc00089caa0) (5) Data frame handling I0906 20:19:54.892385 7 log.go:172] (0xc00045f760) Data frame received for 3 I0906 20:19:54.892415 7 log.go:172] (0xc00089c960) (3) Data frame handling I0906 20:19:54.892450 7 log.go:172] (0xc00089c960) (3) Data frame sent I0906 20:19:54.892471 7 log.go:172] (0xc00045f760) Data frame received for 3 I0906 20:19:54.892491 7 log.go:172] (0xc00089c960) (3) Data frame handling I0906 20:19:54.894031 7 log.go:172] (0xc00045f760) Data frame received for 1 I0906 20:19:54.894061 7 log.go:172] (0xc00089c820) (1) Data frame handling I0906 20:19:54.894101 7 log.go:172] (0xc00089c820) (1) Data frame sent I0906 20:19:54.894124 7 log.go:172] (0xc00045f760) (0xc00089c820) Stream removed, broadcasting: 1 I0906 20:19:54.894203 7 log.go:172] (0xc00045f760) Go away received I0906 20:19:54.894232 7 log.go:172] (0xc00045f760) (0xc00089c820) Stream removed, broadcasting: 1 I0906 20:19:54.894259 7 log.go:172] (0xc00045f760) (0xc00089c960) Stream removed, broadcasting: 3 I0906 20:19:54.894297 7 log.go:172] (0xc00045f760) (0xc00089caa0) Stream removed, broadcasting: 5 Sep 6 20:19:54.894: INFO: Exec stderr: "" Sep 6 20:19:54.894: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.894: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:54.927250 7 log.go:172] (0xc0019f42c0) (0xc000c44280) Create stream I0906 20:19:54.927270 7 log.go:172] (0xc0019f42c0) (0xc000c44280) Stream added, broadcasting: 1 I0906 20:19:54.928572 7 log.go:172] (0xc0019f42c0) Reply frame received for 1 I0906 20:19:54.928596 7 log.go:172] (0xc0019f42c0) (0xc000c44320) Create stream I0906 20:19:54.928604 7 log.go:172] (0xc0019f42c0) (0xc000c44320) Stream added, broadcasting: 3 I0906 20:19:54.929326 7 log.go:172] (0xc0019f42c0) Reply frame received for 3 I0906 20:19:54.929348 7 log.go:172] (0xc0019f42c0) (0xc000c443c0) Create stream I0906 20:19:54.929356 7 log.go:172] (0xc0019f42c0) (0xc000c443c0) Stream added, broadcasting: 5 I0906 20:19:54.930121 7 log.go:172] (0xc0019f42c0) Reply frame received for 5 I0906 20:19:54.981758 7 log.go:172] (0xc0019f42c0) Data frame received for 3 I0906 20:19:54.981790 7 log.go:172] (0xc000c44320) (3) Data frame handling I0906 20:19:54.981800 7 log.go:172] (0xc000c44320) (3) Data frame sent I0906 20:19:54.981807 7 log.go:172] (0xc0019f42c0) Data frame received for 3 I0906 20:19:54.981814 7 log.go:172] (0xc000c44320) (3) Data frame handling I0906 20:19:54.981848 7 log.go:172] (0xc0019f42c0) Data frame received for 5 I0906 20:19:54.981858 7 log.go:172] (0xc000c443c0) (5) Data frame handling I0906 20:19:54.983158 7 log.go:172] (0xc0019f42c0) Data frame received for 1 I0906 20:19:54.983185 7 log.go:172] (0xc000c44280) (1) Data frame handling I0906 20:19:54.983260 7 log.go:172] (0xc000c44280) (1) Data frame sent I0906 20:19:54.983319 7 log.go:172] (0xc0019f42c0) (0xc000c44280) Stream removed, broadcasting: 1 I0906 20:19:54.983359 7 log.go:172] (0xc0019f42c0) Go 
away received I0906 20:19:54.983465 7 log.go:172] (0xc0019f42c0) (0xc000c44280) Stream removed, broadcasting: 1 I0906 20:19:54.983496 7 log.go:172] (0xc0019f42c0) (0xc000c44320) Stream removed, broadcasting: 3 I0906 20:19:54.983564 7 log.go:172] (0xc0019f42c0) (0xc000c443c0) Stream removed, broadcasting: 5 Sep 6 20:19:54.983: INFO: Exec stderr: "" Sep 6 20:19:54.983: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:54.983: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:55.015482 7 log.go:172] (0xc0015042c0) (0xc001e58280) Create stream I0906 20:19:55.015504 7 log.go:172] (0xc0015042c0) (0xc001e58280) Stream added, broadcasting: 1 I0906 20:19:55.017015 7 log.go:172] (0xc0015042c0) Reply frame received for 1 I0906 20:19:55.017041 7 log.go:172] (0xc0015042c0) (0xc001e58320) Create stream I0906 20:19:55.017051 7 log.go:172] (0xc0015042c0) (0xc001e58320) Stream added, broadcasting: 3 I0906 20:19:55.017805 7 log.go:172] (0xc0015042c0) Reply frame received for 3 I0906 20:19:55.017830 7 log.go:172] (0xc0015042c0) (0xc001a68140) Create stream I0906 20:19:55.017841 7 log.go:172] (0xc0015042c0) (0xc001a68140) Stream added, broadcasting: 5 I0906 20:19:55.018644 7 log.go:172] (0xc0015042c0) Reply frame received for 5 I0906 20:19:55.075347 7 log.go:172] (0xc0015042c0) Data frame received for 5 I0906 20:19:55.075393 7 log.go:172] (0xc001a68140) (5) Data frame handling I0906 20:19:55.075432 7 log.go:172] (0xc0015042c0) Data frame received for 3 I0906 20:19:55.075448 7 log.go:172] (0xc001e58320) (3) Data frame handling I0906 20:19:55.075468 7 log.go:172] (0xc001e58320) (3) Data frame sent I0906 20:19:55.075513 7 log.go:172] (0xc0015042c0) Data frame received for 3 I0906 20:19:55.075532 7 log.go:172] (0xc001e58320) (3) Data frame handling I0906 20:19:55.077002 7 log.go:172] (0xc0015042c0) Data frame received for 1 I0906 20:19:55.077021 7 log.go:172] (0xc001e58280) (1) Data frame handling I0906 20:19:55.077033 7 log.go:172] (0xc001e58280) (1) Data frame sent I0906 20:19:55.077102 7 log.go:172] (0xc0015042c0) (0xc001e58280) Stream removed, broadcasting: 1 I0906 20:19:55.077175 7 log.go:172] (0xc0015042c0) (0xc001e58280) Stream removed, broadcasting: 1 I0906 20:19:55.077198 7 log.go:172] (0xc0015042c0) (0xc001e58320) Stream removed, broadcasting: 3 I0906 20:19:55.077212 7 log.go:172] (0xc0015042c0) (0xc001a68140) Stream removed, broadcasting: 5 Sep 6 20:19:55.077: INFO: Exec stderr: "" I0906 20:19:55.077240 7 log.go:172] (0xc0015042c0) Go away received Sep 6 20:19:55.077: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5fkgp PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 6 20:19:55.077: INFO: >>> kubeConfig: /root/.kube/config I0906 20:19:55.109303 7 log.go:172] (0xc0012462c0) (0xc001a683c0) Create stream I0906 20:19:55.109345 7 log.go:172] (0xc0012462c0) (0xc001a683c0) Stream added, broadcasting: 1 I0906 20:19:55.112153 7 log.go:172] (0xc0012462c0) Reply frame received for 1 I0906 20:19:55.112191 7 log.go:172] (0xc0012462c0) (0xc000c44460) Create stream I0906 20:19:55.112201 7 log.go:172] (0xc0012462c0) (0xc000c44460) Stream added, broadcasting: 3 I0906 20:19:55.115086 7 log.go:172] (0xc0012462c0) Reply frame received for 3 I0906 20:19:55.115115 7 log.go:172] (0xc0012462c0) (0xc001a68460) 
Create stream I0906 20:19:55.115131 7 log.go:172] (0xc0012462c0) (0xc001a68460) Stream added, broadcasting: 5 I0906 20:19:55.116241 7 log.go:172] (0xc0012462c0) Reply frame received for 5 I0906 20:19:55.169933 7 log.go:172] (0xc0012462c0) Data frame received for 3 I0906 20:19:55.170026 7 log.go:172] (0xc000c44460) (3) Data frame handling I0906 20:19:55.170063 7 log.go:172] (0xc000c44460) (3) Data frame sent I0906 20:19:55.170091 7 log.go:172] (0xc0012462c0) Data frame received for 3 I0906 20:19:55.170171 7 log.go:172] (0xc000c44460) (3) Data frame handling I0906 20:19:55.170212 7 log.go:172] (0xc0012462c0) Data frame received for 5 I0906 20:19:55.170236 7 log.go:172] (0xc001a68460) (5) Data frame handling I0906 20:19:55.171305 7 log.go:172] (0xc0012462c0) Data frame received for 1 I0906 20:19:55.171331 7 log.go:172] (0xc001a683c0) (1) Data frame handling I0906 20:19:55.171346 7 log.go:172] (0xc001a683c0) (1) Data frame sent I0906 20:19:55.171361 7 log.go:172] (0xc0012462c0) (0xc001a683c0) Stream removed, broadcasting: 1 I0906 20:19:55.171438 7 log.go:172] (0xc0012462c0) Go away received I0906 20:19:55.171499 7 log.go:172] (0xc0012462c0) (0xc001a683c0) Stream removed, broadcasting: 1 I0906 20:19:55.171548 7 log.go:172] (0xc0012462c0) (0xc000c44460) Stream removed, broadcasting: 3 I0906 20:19:55.171589 7 log.go:172] (0xc0012462c0) (0xc001a68460) Stream removed, broadcasting: 5 Sep 6 20:19:55.171: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:19:55.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-5fkgp" for this suite. Sep 6 20:20:57.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:20:57.202: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-5fkgp, resource: bindings, ignored listing per whitelist Sep 6 20:20:57.245: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-5fkgp deletion completed in 1m2.070079414s • [SLOW TEST:81.450 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:20:57.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 6 20:21:05.915: INFO: Successfully updated pod "pod-update-78f188aa-f07e-11ea-b72c-0242ac110008" STEP: verifying the updated pod is in 
kubernetes Sep 6 20:21:05.941: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:21:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bxxtz" for this suite. Sep 6 20:21:27.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:21:28.040: INFO: namespace: e2e-tests-pods-bxxtz, resource: bindings, ignored listing per whitelist Sep 6 20:21:28.040: INFO: namespace e2e-tests-pods-bxxtz deletion completed in 22.096643576s • [SLOW TEST:30.795 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:21:28.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 6 20:21:28.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-pgrcd" to be "success or failure" Sep 6 20:21:28.245: INFO: Pod "downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.820064ms Sep 6 20:21:30.255: INFO: Pod "downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016523799s Sep 6 20:21:32.258: INFO: Pod "downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019902757s STEP: Saw pod success Sep 6 20:21:32.258: INFO: Pod "downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:21:32.261: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008 container client-container: STEP: delete the pod Sep 6 20:21:32.280: INFO: Waiting for pod downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:21:32.285: INFO: Pod downwardapi-volume-8b4bbf0b-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:21:32.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pgrcd" for this suite. 
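The projected downwardAPI test above builds a pod whose projected volume exposes the container's CPU request as a file, then reads that file back from the container's output. A rough equivalent of such a pod spec, sketched with the k8s.io/api/core/v1 types (pod name, image, paths, and values are illustrative, not the framework's exact manifest):

```go
// Sketch of a pod with a projected downward API volume carrying requests.cpu.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									// resourceFieldRef resolves the container's own request.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(downwardAPIVolumePod().Name)
}
```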
Sep 6 20:21:38.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:21:38.372: INFO: namespace: e2e-tests-projected-pgrcd, resource: bindings, ignored listing per whitelist Sep 6 20:21:38.426: INFO: namespace e2e-tests-projected-pgrcd deletion completed in 6.138300233s • [SLOW TEST:10.386 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:21:38.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:22:02.555: INFO: Container started at 2020-09-06 20:21:43 +0000 UTC, pod became ready at 2020-09-06 20:22:01 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:22:02.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cwnvd" for this suite. 
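The container-probe test above verifies that a pod with a readiness probe and an initial delay does not report Ready before that delay has elapsed (container started 20:21:43, pod became ready 20:22:01) and never restarts. A container definition in the same spirit, sketched with the core/v1 types (image, command, and timings are illustrative):

```go
// Sketch of an exec readiness probe with an initial delay.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func readinessContainer() corev1.Container {
	c := corev1.Container{
		Name:    "readiness-example",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
	}
	probe := &corev1.Probe{
		InitialDelaySeconds: 30, // the pod must not report Ready before this delay
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// Exec is a promoted field from the embedded handler struct, so this
	// assignment works regardless of the client library version.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}
	c.ReadinessProbe = probe
	return c
}

func main() {
	fmt.Println(readinessContainer().ReadinessProbe.InitialDelaySeconds)
}
```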
Sep 6 20:22:34.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:22:34.604: INFO: namespace: e2e-tests-container-probe-cwnvd, resource: bindings, ignored listing per whitelist Sep 6 20:22:34.659: INFO: namespace e2e-tests-container-probe-cwnvd deletion completed in 32.10072041s • [SLOW TEST:56.233 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:22:34.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0906 20:22:46.209611 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 6 20:22:46.209: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:22:46.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2s6t7" for this suite. 
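The garbage-collector test above gives half of the pods a second owner reference, then deletes the first ReplicationController while it waits for dependents; pods that still list the surviving RC as a valid owner must not be collected. The ownership and deletion options involved look roughly like this (a sketch with illustrative names and UIDs, not the test's actual wiring):

```go
// Sketch of a dependent with two owners plus a foreground delete option.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func dependentPod(toBeDeletedUID, toStayUID types.UID) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-be-deleted", UID: toBeDeletedUID},
				{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-stay", UID: toStayUID},
			},
		},
	}
}

// Foreground deletion: the deleted owner gets a deletionTimestamp and waits for
// its blocking dependents; a dependent that still has another live owner stays.
func foregroundDeleteOptions() metav1.DeleteOptions {
	policy := metav1.DeletePropagationForeground
	return metav1.DeleteOptions{PropagationPolicy: &policy}
}

func main() {
	fmt.Println(dependentPod("uid-1", "uid-2").OwnerReferences[1].Name)
	fmt.Println(*foregroundDeleteOptions().PropagationPolicy)
}
```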
Sep 6 20:22:56.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:22:56.399: INFO: namespace: e2e-tests-gc-2s6t7, resource: bindings, ignored listing per whitelist Sep 6 20:22:56.450: INFO: namespace e2e-tests-gc-2s6t7 deletion completed in 10.237900436s • [SLOW TEST:21.790 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:22:56.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-c01a3359-f07e-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume secrets Sep 6 20:22:56.896: INFO: Waiting up to 5m0s for pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-mqfjv" to be "success or failure" Sep 6 20:22:56.898: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119057ms Sep 6 20:22:58.902: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005580608s Sep 6 20:23:01.732: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835552899s Sep 6 20:23:06.222: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.325362791s Sep 6 20:23:08.416: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.519747124s Sep 6 20:23:10.451: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.555254423s Sep 6 20:23:12.455: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.559209722s STEP: Saw pod success Sep 6 20:23:12.455: INFO: Pod "pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:23:12.458: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008 container secret-volume-test: STEP: delete the pod Sep 6 20:23:12.482: INFO: Waiting for pod pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:23:12.502: INFO: Pod pod-secrets-c021888f-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:23:12.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mqfjv" for this suite. Sep 6 20:23:19.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:23:19.165: INFO: namespace: e2e-tests-secrets-mqfjv, resource: bindings, ignored listing per whitelist Sep 6 20:23:19.220: INFO: namespace e2e-tests-secrets-mqfjv deletion completed in 6.715896408s • [SLOW TEST:22.770 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:23:19.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 6 20:23:19.580: INFO: Waiting up to 5m0s for pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-cs4xt" to be "success or failure" Sep 6 20:23:19.619: INFO: Pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 38.559412ms Sep 6 20:23:21.622: INFO: Pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04235645s Sep 6 20:23:23.626: INFO: Pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046177684s Sep 6 20:23:25.630: INFO: Pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049896403s STEP: Saw pod success Sep 6 20:23:25.630: INFO: Pod "pod-cdb16b20-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:23:25.633: INFO: Trying to get logs from node hunter-worker2 pod pod-cdb16b20-f07e-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:23:25.655: INFO: Waiting for pod pod-cdb16b20-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:23:25.659: INFO: Pod pod-cdb16b20-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:23:25.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cs4xt" for this suite. Sep 6 20:23:31.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:23:31.870: INFO: namespace: e2e-tests-emptydir-cs4xt, resource: bindings, ignored listing per whitelist Sep 6 20:23:31.916: INFO: namespace e2e-tests-emptydir-cs4xt deletion completed in 6.254661588s • [SLOW TEST:12.696 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:23:31.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Sep 6 20:23:32.008: INFO: Waiting up to 5m0s for pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-blg4s" to be "success or failure" Sep 6 20:23:32.019: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.832027ms Sep 6 20:23:34.024: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016020283s Sep 6 20:23:36.027: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019028516s Sep 6 20:23:38.223: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215444857s Sep 6 20:23:40.489: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481417586s Sep 6 20:23:42.492: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.484508162s Sep 6 20:23:44.496: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.487911262s Sep 6 20:23:47.408: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.400101016s Sep 6 20:23:49.411: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.40316285s Sep 6 20:23:51.511: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.503179494s Sep 6 20:23:53.515: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.506874175s Sep 6 20:23:55.550: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.541911381s Sep 6 20:23:57.767: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.759416655s Sep 6 20:23:59.771: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.763054978s STEP: Saw pod success Sep 6 20:23:59.771: INFO: Pod "downward-api-d519bf98-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:23:59.775: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d519bf98-f07e-11ea-b72c-0242ac110008 container dapi-container: STEP: delete the pod Sep 6 20:23:59.812: INFO: Waiting for pod downward-api-d519bf98-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:23:59.822: INFO: Pod downward-api-d519bf98-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:23:59.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-blg4s" for this suite. 
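The Downward API test above injects the container's own limits and requests into its environment via resourceFieldRef and then checks the printed values. A container spec in that shape, sketched with the core/v1 types (names and image are illustrative):

```go
// Sketch of limits/requests exposed as env vars through the downward API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func dapiContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
			{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
			{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
			{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
		},
	}
}

func main() {
	fmt.Println(len(dapiContainer().Env), "downward API env vars")
}
```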
Sep 6 20:24:07.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:24:07.880: INFO: namespace: e2e-tests-downward-api-blg4s, resource: bindings, ignored listing per whitelist Sep 6 20:24:07.934: INFO: namespace e2e-tests-downward-api-blg4s deletion completed in 8.109577444s • [SLOW TEST:36.018 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:24:07.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:24:08.053: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:24:09.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-rtfrk" for this suite. 
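
The CustomResourceDefinition spec above only checks that a definition object can be created and deleted. Against a v1.13 cluster such as this one, an equivalent manual round-trip could use the apiextensions v1beta1 API; the group, kind and plural below are made up for illustration:

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches this 1.13 apiserver
kind: CustomResourceDefinition
metadata:
  name: foos.example.com                   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete crd foos.example.com
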
Sep 6 20:24:15.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:24:15.191: INFO: namespace: e2e-tests-custom-resource-definition-rtfrk, resource: bindings, ignored listing per whitelist Sep 6 20:24:15.222: INFO: namespace e2e-tests-custom-resource-definition-rtfrk deletion completed in 6.096410617s • [SLOW TEST:7.288 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:24:15.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Sep 6 20:24:15.397: INFO: Waiting up to 5m0s for pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-dpbhq" to be "success or failure" Sep 6 20:24:15.399: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 1.462799ms Sep 6 20:24:17.446: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048142696s Sep 6 20:24:19.494: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096339286s Sep 6 20:24:21.497: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09909228s Sep 6 20:24:23.699: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301516073s Sep 6 20:24:25.703: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.305052313s Sep 6 20:24:27.706: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.308453475s Sep 6 20:24:30.179: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.781033968s Sep 6 20:24:32.183: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.78506658s STEP: Saw pod success Sep 6 20:24:32.183: INFO: Pod "pod-eef2b215-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:24:32.185: INFO: Trying to get logs from node hunter-worker pod pod-eef2b215-f07e-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:24:32.957: INFO: Waiting for pod pod-eef2b215-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:24:32.961: INFO: Pod pod-eef2b215-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:24:32.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dpbhq" for this suite. Sep 6 20:24:39.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:24:39.563: INFO: namespace: e2e-tests-emptydir-dpbhq, resource: bindings, ignored listing per whitelist Sep 6 20:24:39.564: INFO: namespace e2e-tests-emptydir-dpbhq deletion completed in 6.541961683s • [SLOW TEST:24.342 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:24:39.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Sep 6 20:24:39.838: INFO: Waiting up to 5m0s for pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008" in namespace "e2e-tests-var-expansion-kdp2l" to be "success or failure" Sep 6 20:24:39.847: INFO: Pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.654714ms Sep 6 20:24:42.975: INFO: Pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136997734s Sep 6 20:24:44.979: INFO: Pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 5.140945825s Sep 6 20:24:46.983: INFO: Pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.144619419s STEP: Saw pod success Sep 6 20:24:46.983: INFO: Pod "var-expansion-fd812143-f07e-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:24:46.986: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-fd812143-f07e-11ea-b72c-0242ac110008 container dapi-container: STEP: delete the pod Sep 6 20:24:47.076: INFO: Waiting for pod var-expansion-fd812143-f07e-11ea-b72c-0242ac110008 to disappear Sep 6 20:24:47.087: INFO: Pod var-expansion-fd812143-f07e-11ea-b72c-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:24:47.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-kdp2l" for this suite. Sep 6 20:24:53.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:24:53.132: INFO: namespace: e2e-tests-var-expansion-kdp2l, resource: bindings, ignored listing per whitelist Sep 6 20:24:53.182: INFO: namespace e2e-tests-var-expansion-kdp2l deletion completed in 6.091073669s • [SLOW TEST:13.617 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:24:53.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 6 20:24:53.299: INFO: Waiting up to 5m0s for pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-nr4rg" to be "success or failure" Sep 6 20:24:53.303: INFO: Pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055613ms Sep 6 20:24:55.808: INFO: Pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.50896476s Sep 6 20:24:57.811: INFO: Pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512510095s Sep 6 20:24:59.815: INFO: Pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.516455221s STEP: Saw pod success Sep 6 20:24:59.815: INFO: Pod "pod-058dc40a-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:24:59.818: INFO: Trying to get logs from node hunter-worker2 pod pod-058dc40a-f07f-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:24:59.921: INFO: Waiting for pod pod-058dc40a-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:24:59.962: INFO: Pod pod-058dc40a-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:24:59.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nr4rg" for this suite. Sep 6 20:25:06.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:25:06.088: INFO: namespace: e2e-tests-emptydir-nr4rg, resource: bindings, ignored listing per whitelist Sep 6 20:25:06.150: INFO: namespace e2e-tests-emptydir-nr4rg deletion completed in 6.184677444s • [SLOW TEST:12.968 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:25:06.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Sep 6 20:25:07.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cnmtg' Sep 6 20:25:09.747: INFO: stderr: "" Sep 6 20:25:09.747: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Sep 6 20:25:10.751: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:10.751: INFO: Found 0 / 1 Sep 6 20:25:11.861: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:11.861: INFO: Found 0 / 1 Sep 6 20:25:12.886: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:12.886: INFO: Found 0 / 1 Sep 6 20:25:13.753: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:13.753: INFO: Found 0 / 1 Sep 6 20:25:14.751: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:14.752: INFO: Found 0 / 1 Sep 6 20:25:15.751: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:15.751: INFO: Found 1 / 1 Sep 6 20:25:15.751: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Sep 6 20:25:15.754: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:15.754: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 6 20:25:15.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kv5rn --namespace=e2e-tests-kubectl-cnmtg -p {"metadata":{"annotations":{"x":"y"}}}' Sep 6 20:25:15.851: INFO: stderr: "" Sep 6 20:25:15.851: INFO: stdout: "pod/redis-master-kv5rn patched\n" STEP: checking annotations Sep 6 20:25:15.854: INFO: Selector matched 1 pods for map[app:redis] Sep 6 20:25:15.854: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:25:15.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cnmtg" for this suite. Sep 6 20:25:40.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:25:41.017: INFO: namespace: e2e-tests-kubectl-cnmtg, resource: bindings, ignored listing per whitelist Sep 6 20:25:41.019: INFO: namespace e2e-tests-kubectl-cnmtg deletion completed in 25.162533839s • [SLOW TEST:34.869 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:25:41.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Sep 6 20:25:41.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:53.773: INFO: stderr: "" Sep 6 20:25:53.773: INFO: stdout: "pod/pause created\n" Sep 6 20:25:53.773: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 6 20:25:53.773: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tqmwp" to be "running and ready" Sep 6 20:25:53.783: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.75963ms Sep 6 20:25:55.942: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168482995s Sep 6 20:25:57.945: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.17204408s Sep 6 20:25:57.946: INFO: Pod "pause" satisfied condition "running and ready" Sep 6 20:25:57.946: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Sep 6 20:25:57.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.044: INFO: stderr: "" Sep 6 20:25:58.044: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 6 20:25:58.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.260: INFO: stderr: "" Sep 6 20:25:58.260: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 6 20:25:58.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.445: INFO: stderr: "" Sep 6 20:25:58.445: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 6 20:25:58.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.539: INFO: stderr: "" Sep 6 20:25:58.539: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Sep 6 20:25:58.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.660: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:25:58.660: INFO: stdout: "pod \"pause\" force deleted\n" Sep 6 20:25:58.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tqmwp' Sep 6 20:25:58.762: INFO: stderr: "No resources found.\n" Sep 6 20:25:58.762: INFO: stdout: "" Sep 6 20:25:58.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tqmwp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 6 20:25:58.848: INFO: stderr: "" Sep 6 20:25:58.848: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:25:58.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tqmwp" for this suite. 
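
The two kubectl specs just above reduce to a handful of client-side commands, all taken from the log itself (the pod names come from the generated redis-master RC and the pause pod; add your own --namespace flag as needed):

# annotate a pod in place, as the "Kubectl patch" spec does
kubectl patch pod redis-master-kv5rn -p '{"metadata":{"annotations":{"x":"y"}}}'

# add, display and remove a label, as the "Kubectl label" spec does
kubectl label pod pause testing-label=testing-label-value
kubectl get pod pause -L testing-label        # shows a TESTING-LABEL column
kubectl label pod pause testing-label-        # trailing '-' removes the label
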
Sep 6 20:26:04.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:26:04.927: INFO: namespace: e2e-tests-kubectl-tqmwp, resource: bindings, ignored listing per whitelist Sep 6 20:26:04.973: INFO: namespace e2e-tests-kubectl-tqmwp deletion completed in 6.122213312s • [SLOW TEST:23.954 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:26:04.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 6 20:26:05.109: INFO: Waiting up to 5m0s for pod "pod-30587a70-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-4dlvr" to be "success or failure" Sep 6 20:26:05.143: INFO: Pod "pod-30587a70-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.615162ms Sep 6 20:26:07.496: INFO: Pod "pod-30587a70-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387794188s Sep 6 20:26:09.499: INFO: Pod "pod-30587a70-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.390882537s STEP: Saw pod success Sep 6 20:26:09.499: INFO: Pod "pod-30587a70-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:26:09.502: INFO: Trying to get logs from node hunter-worker2 pod pod-30587a70-f07f-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:26:09.515: INFO: Waiting for pod pod-30587a70-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:26:09.520: INFO: Pod pod-30587a70-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:26:09.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4dlvr" for this suite. 
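
The emptydir specs in this run share one shape: a short-lived pod mounts an emptyDir volume (memory-backed for the tmpfs variants), the test container creates a file with the requested mode, and the suite reads the result back from the container log. A hand-rolled version of the (root,0666,tmpfs) case might look roughly like this; the image and command are illustrative stand-ins for the suite's own test image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
              "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF
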
Sep 6 20:26:15.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:26:15.602: INFO: namespace: e2e-tests-emptydir-4dlvr, resource: bindings, ignored listing per whitelist Sep 6 20:26:15.675: INFO: namespace e2e-tests-emptydir-4dlvr deletion completed in 6.152629422s • [SLOW TEST:10.701 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:26:15.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-36b5cc89-f07f-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume configMaps Sep 6 20:26:15.840: INFO: Waiting up to 5m0s for pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-rzbwd" to be "success or failure" Sep 6 20:26:15.867: INFO: Pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.042759ms Sep 6 20:26:18.084: INFO: Pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244487604s Sep 6 20:26:20.088: INFO: Pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247938043s Sep 6 20:26:22.091: INFO: Pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251662581s STEP: Saw pod success Sep 6 20:26:22.092: INFO: Pod "pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:26:22.094: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008 container configmap-volume-test: STEP: delete the pod Sep 6 20:26:22.295: INFO: Waiting for pod pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:26:22.311: INFO: Pod pod-configmaps-36beb8d1-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:26:22.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rzbwd" for this suite. 
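
The configmap spec above mounts a ConfigMap through a volume with an explicit key-to-path mapping and runs the consuming container as a non-root user. A minimal sketch of that combination (ConfigMap name, key, uid and image are all illustrative):

kubectl create configmap cm-example --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # consume the volume as a non-root uid
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-example
      items:                         # map one key to a custom relative path
      - key: data-1
        path: path/to/data-1
EOF
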
Sep 6 20:26:28.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:26:28.375: INFO: namespace: e2e-tests-configmap-rzbwd, resource: bindings, ignored listing per whitelist Sep 6 20:26:28.389: INFO: namespace e2e-tests-configmap-rzbwd deletion completed in 6.075368545s • [SLOW TEST:12.714 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:26:28.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-3e4b7dda-f07f-11ea-b72c-0242ac110008 STEP: Creating a pod to test consume configMaps Sep 6 20:26:28.516: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-2htkz" to be "success or failure" Sep 6 20:26:28.526: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.498069ms Sep 6 20:26:30.934: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417955383s Sep 6 20:26:32.937: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421082479s Sep 6 20:26:35.132: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616103308s Sep 6 20:26:37.136: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619914418s Sep 6 20:26:39.249: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.732916261s Sep 6 20:26:41.252: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.736156448s Sep 6 20:26:43.267: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.751124202s STEP: Saw pod success Sep 6 20:26:43.267: INFO: Pod "pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:26:43.270: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Sep 6 20:26:43.288: INFO: Waiting for pod pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:26:43.308: INFO: Pod pod-projected-configmaps-3e4e8d22-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:26:43.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2htkz" for this suite. Sep 6 20:26:49.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:26:49.337: INFO: namespace: e2e-tests-projected-2htkz, resource: bindings, ignored listing per whitelist Sep 6 20:26:49.382: INFO: namespace e2e-tests-projected-2htkz deletion completed in 6.071115609s • [SLOW TEST:20.993 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:26:49.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 6 20:26:49.510: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-qmn49" to be "success or failure" Sep 6 20:26:49.514: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.555355ms Sep 6 20:26:51.518: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008028293s Sep 6 20:26:53.521: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011487252s Sep 6 20:26:55.718: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.208285031s Sep 6 20:26:58.563: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.053135935s Sep 6 20:27:00.567: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056937364s Sep 6 20:27:02.898: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 13.388026434s Sep 6 20:27:05.455: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.945367042s STEP: Saw pod success Sep 6 20:27:05.455: INFO: Pod "downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:27:05.623: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008 container client-container: STEP: delete the pod Sep 6 20:27:05.863: INFO: Waiting for pod downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:27:05.947: INFO: Pod downwardapi-volume-4ad1ba67-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:27:05.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qmn49" for this suite. Sep 6 20:27:12.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:27:12.055: INFO: namespace: e2e-tests-downward-api-qmn49, resource: bindings, ignored listing per whitelist Sep 6 20:27:12.106: INFO: namespace e2e-tests-downward-api-qmn49 deletion completed in 6.155556276s • [SLOW TEST:22.723 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:27:12.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 6 20:27:12.216: INFO: Waiting up to 5m0s for pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-9nxjr" to be "success or failure" Sep 6 20:27:12.249: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 32.627148ms Sep 6 20:27:14.325: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.108280604s Sep 6 20:27:16.327: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110572635s Sep 6 20:27:18.455: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 6.238355311s Sep 6 20:27:20.457: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240777179s STEP: Saw pod success Sep 6 20:27:20.457: INFO: Pod "pod-5859b9be-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:27:20.459: INFO: Trying to get logs from node hunter-worker2 pod pod-5859b9be-f07f-11ea-b72c-0242ac110008 container test-container: STEP: delete the pod Sep 6 20:27:20.677: INFO: Waiting for pod pod-5859b9be-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:27:20.850: INFO: Pod pod-5859b9be-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:27:20.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9nxjr" for this suite. Sep 6 20:27:26.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:27:26.873: INFO: namespace: e2e-tests-emptydir-9nxjr, resource: bindings, ignored listing per whitelist Sep 6 20:27:26.925: INFO: namespace e2e-tests-emptydir-9nxjr deletion completed in 6.072809345s • [SLOW TEST:14.819 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:27:26.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-m66l STEP: Creating a pod to test atomic-volume-subpath Sep 6 20:27:27.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m66l" in namespace "e2e-tests-subpath-bn5lp" to be "success or failure" Sep 6 20:27:27.437: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Pending", Reason="", readiness=false. Elapsed: 250.040665ms Sep 6 20:27:29.563: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375449545s Sep 6 20:27:31.566: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.378334383s Sep 6 20:27:33.569: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382225119s Sep 6 20:27:35.572: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 8.385202887s Sep 6 20:27:37.576: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 10.388690763s Sep 6 20:27:39.579: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 12.391981897s Sep 6 20:27:41.582: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 14.394685941s Sep 6 20:27:43.585: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 16.39793307s Sep 6 20:27:45.587: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 18.400215721s Sep 6 20:27:47.590: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 20.402611185s Sep 6 20:27:49.593: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 22.405489078s Sep 6 20:27:51.596: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 24.408733099s Sep 6 20:27:53.599: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 26.411746487s Sep 6 20:27:55.910: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 28.722698979s Sep 6 20:27:57.913: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 30.725359417s Sep 6 20:27:59.915: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Running", Reason="", readiness=false. Elapsed: 32.727865367s Sep 6 20:28:01.919: INFO: Pod "pod-subpath-test-configmap-m66l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.731381696s STEP: Saw pod success Sep 6 20:28:01.919: INFO: Pod "pod-subpath-test-configmap-m66l" satisfied condition "success or failure" Sep 6 20:28:02.055: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-m66l container test-container-subpath-configmap-m66l: STEP: delete the pod Sep 6 20:28:02.335: INFO: Waiting for pod pod-subpath-test-configmap-m66l to disappear Sep 6 20:28:02.485: INFO: Pod pod-subpath-test-configmap-m66l no longer exists STEP: Deleting pod pod-subpath-test-configmap-m66l Sep 6 20:28:02.485: INFO: Deleting pod "pod-subpath-test-configmap-m66l" in namespace "e2e-tests-subpath-bn5lp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:28:02.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-bn5lp" for this suite. 
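
The atomic-writer subpath specs keep their pod in Running for a while, as the long sequence of poll lines above shows, while a container reads a single file that was mounted out of a larger volume via volumeMounts.subPath. A rough equivalent of the configmap case, with illustrative names and a plain sleep in place of the suite's test binary:

kubectl create configmap subpath-cm --from-literal=index.html=hello
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test/index.html && sleep 30"]
    volumeMounts:
    - name: config
      mountPath: /test/index.html
      subPath: index.html            # mount just this key as a single file
  volumes:
  - name: config
    configMap:
      name: subpath-cm
EOF
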
Sep 6 20:28:08.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:28:08.550: INFO: namespace: e2e-tests-subpath-bn5lp, resource: bindings, ignored listing per whitelist Sep 6 20:28:08.562: INFO: namespace e2e-tests-subpath-bn5lp deletion completed in 6.071708769s • [SLOW TEST:41.636 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:28:08.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 6 20:28:08.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-r7lzz" to be "success or failure" Sep 6 20:28:08.875: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 110.77971ms Sep 6 20:28:11.437: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672623863s Sep 6 20:28:13.440: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.676387376s Sep 6 20:28:15.474: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709711103s Sep 6 20:28:17.522: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758287991s Sep 6 20:28:19.681: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.916863302s Sep 6 20:28:21.833: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.068722073s STEP: Saw pod success Sep 6 20:28:21.833: INFO: Pod "downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:28:21.834: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008 container client-container: STEP: delete the pod Sep 6 20:28:21.966: INFO: Waiting for pod downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008 to disappear Sep 6 20:28:22.015: INFO: Pod downwardapi-volume-7a08463f-f07f-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:28:22.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r7lzz" for this suite. Sep 6 20:28:28.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:28:28.169: INFO: namespace: e2e-tests-projected-r7lzz, resource: bindings, ignored listing per whitelist Sep 6 20:28:28.174: INFO: namespace e2e-tests-projected-r7lzz deletion completed in 6.155506855s • [SLOW TEST:19.612 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:28:28.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-z96s STEP: Creating a pod to test atomic-volume-subpath Sep 6 20:28:28.309: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-z96s" in namespace "e2e-tests-subpath-hxjdz" to be "success or failure" Sep 6 20:28:28.314: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 5.75521ms Sep 6 20:28:31.439: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130044598s Sep 6 20:28:33.515: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 5.206429529s Sep 6 20:28:35.700: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.391846475s Sep 6 20:28:37.704: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 9.395033925s Sep 6 20:28:39.731: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.422366383s Sep 6 20:28:42.384: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.075819465s Sep 6 20:28:44.470: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.161272678s Sep 6 20:28:46.474: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.165165052s Sep 6 20:28:48.557: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.248236124s Sep 6 20:28:51.076: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 22.767828977s Sep 6 20:28:53.080: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 24.771170328s Sep 6 20:28:55.540: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 27.231608s Sep 6 20:29:05.037: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 36.727962009s Sep 6 20:29:07.187: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 38.878356653s Sep 6 20:29:09.190: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 40.881870661s Sep 6 20:29:11.195: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 42.886336787s Sep 6 20:29:13.199: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 44.890136652s Sep 6 20:29:15.202: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 46.893552887s Sep 6 20:29:17.205: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 48.896826267s Sep 6 20:29:19.209: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Pending", Reason="", readiness=false. Elapsed: 50.900244788s Sep 6 20:29:21.212: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 52.903803702s Sep 6 20:29:23.216: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 54.907081487s Sep 6 20:29:25.219: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 56.910522616s Sep 6 20:29:27.223: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 58.914435287s Sep 6 20:29:29.227: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.918046504s Sep 6 20:29:31.230: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.921567896s Sep 6 20:29:33.234: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.925423381s Sep 6 20:29:35.238: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.929662295s Sep 6 20:29:37.334: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m9.025414977s Sep 6 20:29:39.337: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Running", Reason="", readiness=false. Elapsed: 1m11.028356049s Sep 6 20:29:41.341: INFO: Pod "pod-subpath-test-projected-z96s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m13.032106472s STEP: Saw pod success Sep 6 20:29:41.341: INFO: Pod "pod-subpath-test-projected-z96s" satisfied condition "success or failure" Sep 6 20:29:41.343: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-z96s container test-container-subpath-projected-z96s: STEP: delete the pod Sep 6 20:29:41.439: INFO: Waiting for pod pod-subpath-test-projected-z96s to disappear Sep 6 20:29:41.538: INFO: Pod pod-subpath-test-projected-z96s no longer exists STEP: Deleting pod pod-subpath-test-projected-z96s Sep 6 20:29:41.538: INFO: Deleting pod "pod-subpath-test-projected-z96s" in namespace "e2e-tests-subpath-hxjdz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:29:41.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-hxjdz" for this suite. Sep 6 20:29:47.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:29:47.663: INFO: namespace: e2e-tests-subpath-hxjdz, resource: bindings, ignored listing per whitelist Sep 6 20:29:47.670: INFO: namespace e2e-tests-subpath-hxjdz deletion completed in 6.084004428s • [SLOW TEST:79.496 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:29:47.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 6 20:30:14.802: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:30:15.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-b7824" for this suite. 
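
The ReplicaSet spec above turns entirely on label selection: a pre-existing bare pod whose labels match the ReplicaSet selector is adopted (it gains an ownerReference), and changing that label releases it again. A sketch of the same flow with an apps/v1 ReplicaSet; the pod name and label key mirror the log, everything else is illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release     # matches the bare pod, so it gets adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

# flipping the matched label releases the pod from the ReplicaSet again
kubectl label pod pod-adoption-release name=released --overwrite
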
Sep 6 20:31:05.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:31:05.898: INFO: namespace: e2e-tests-replicaset-b7824, resource: bindings, ignored listing per whitelist Sep 6 20:31:05.925: INFO: namespace e2e-tests-replicaset-b7824 deletion completed in 50.09860786s • [SLOW TEST:78.255 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:31:05.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jt6fh [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-jt6fh STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-jt6fh Sep 6 20:31:06.072: INFO: Found 0 stateful pods, waiting for 1 Sep 6 20:31:16.076: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 6 20:31:16.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:31:16.352: INFO: stderr: "I0906 20:31:16.196165 421 log.go:172] (0xc000138630) (0xc00066f360) Create stream\nI0906 20:31:16.196223 421 log.go:172] (0xc000138630) (0xc00066f360) Stream added, broadcasting: 1\nI0906 20:31:16.198927 421 log.go:172] (0xc000138630) Reply frame received for 1\nI0906 20:31:16.198973 421 log.go:172] (0xc000138630) (0xc00066f400) Create stream\nI0906 20:31:16.198983 421 log.go:172] (0xc000138630) (0xc00066f400) Stream added, broadcasting: 3\nI0906 20:31:16.199920 421 log.go:172] (0xc000138630) Reply frame received for 3\nI0906 20:31:16.199956 421 log.go:172] (0xc000138630) (0xc00011c000) Create stream\nI0906 20:31:16.199966 421 log.go:172] (0xc000138630) (0xc00011c000) Stream added, broadcasting: 5\nI0906 20:31:16.200670 421 log.go:172] (0xc000138630) Reply frame received for 5\nI0906 20:31:16.348438 421 log.go:172] (0xc000138630) Data frame received for 3\nI0906 20:31:16.348460 421 log.go:172] (0xc00066f400) (3) Data frame handling\nI0906 
20:31:16.348476 421 log.go:172] (0xc00066f400) (3) Data frame sent\nI0906 20:31:16.348607 421 log.go:172] (0xc000138630) Data frame received for 5\nI0906 20:31:16.348631 421 log.go:172] (0xc000138630) Data frame received for 3\nI0906 20:31:16.348644 421 log.go:172] (0xc00066f400) (3) Data frame handling\nI0906 20:31:16.348656 421 log.go:172] (0xc00011c000) (5) Data frame handling\nI0906 20:31:16.350076 421 log.go:172] (0xc000138630) Data frame received for 1\nI0906 20:31:16.350091 421 log.go:172] (0xc00066f360) (1) Data frame handling\nI0906 20:31:16.350109 421 log.go:172] (0xc00066f360) (1) Data frame sent\nI0906 20:31:16.350146 421 log.go:172] (0xc000138630) (0xc00066f360) Stream removed, broadcasting: 1\nI0906 20:31:16.350160 421 log.go:172] (0xc000138630) Go away received\nI0906 20:31:16.350290 421 log.go:172] (0xc000138630) (0xc00066f360) Stream removed, broadcasting: 1\nI0906 20:31:16.350304 421 log.go:172] (0xc000138630) (0xc00066f400) Stream removed, broadcasting: 3\nI0906 20:31:16.350311 421 log.go:172] (0xc000138630) (0xc00011c000) Stream removed, broadcasting: 5\n" Sep 6 20:31:16.352: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:31:16.352: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 6 20:31:16.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 6 20:31:26.358: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 6 20:31:26.358: INFO: Waiting for statefulset status.replicas updated to 0 Sep 6 20:31:26.370: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:31:26.370: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:31:26.370: INFO: Sep 6 20:31:26.370: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 6 20:31:27.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993634953s Sep 6 20:31:29.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99002638s Sep 6 20:31:31.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.906256249s Sep 6 20:31:32.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.265826831s Sep 6 20:31:33.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.913317032s Sep 6 20:31:35.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.909254366s Sep 6 20:31:36.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 170.819875ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-jt6fh Sep 6 20:31:37.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:31:37.557: INFO: stderr: "I0906 20:31:37.460894 443 log.go:172] (0xc00071a370) (0xc000758640) Create stream\nI0906 20:31:37.460952 443 log.go:172] (0xc00071a370) (0xc000758640) Stream added, broadcasting: 1\nI0906 20:31:37.463084 443 
log.go:172] (0xc00071a370) Reply frame received for 1\nI0906 20:31:37.463131 443 log.go:172] (0xc00071a370) (0xc0007eee60) Create stream\nI0906 20:31:37.463147 443 log.go:172] (0xc00071a370) (0xc0007eee60) Stream added, broadcasting: 3\nI0906 20:31:37.464095 443 log.go:172] (0xc00071a370) Reply frame received for 3\nI0906 20:31:37.464129 443 log.go:172] (0xc00071a370) (0xc00068e000) Create stream\nI0906 20:31:37.464140 443 log.go:172] (0xc00071a370) (0xc00068e000) Stream added, broadcasting: 5\nI0906 20:31:37.465374 443 log.go:172] (0xc00071a370) Reply frame received for 5\nI0906 20:31:37.552863 443 log.go:172] (0xc00071a370) Data frame received for 5\nI0906 20:31:37.552909 443 log.go:172] (0xc00068e000) (5) Data frame handling\nI0906 20:31:37.552940 443 log.go:172] (0xc00071a370) Data frame received for 3\nI0906 20:31:37.552954 443 log.go:172] (0xc0007eee60) (3) Data frame handling\nI0906 20:31:37.552968 443 log.go:172] (0xc0007eee60) (3) Data frame sent\nI0906 20:31:37.552980 443 log.go:172] (0xc00071a370) Data frame received for 3\nI0906 20:31:37.552987 443 log.go:172] (0xc0007eee60) (3) Data frame handling\nI0906 20:31:37.553949 443 log.go:172] (0xc00071a370) Data frame received for 1\nI0906 20:31:37.553974 443 log.go:172] (0xc000758640) (1) Data frame handling\nI0906 20:31:37.553987 443 log.go:172] (0xc000758640) (1) Data frame sent\nI0906 20:31:37.554006 443 log.go:172] (0xc00071a370) (0xc000758640) Stream removed, broadcasting: 1\nI0906 20:31:37.554026 443 log.go:172] (0xc00071a370) Go away received\nI0906 20:31:37.554233 443 log.go:172] (0xc00071a370) (0xc000758640) Stream removed, broadcasting: 1\nI0906 20:31:37.554253 443 log.go:172] (0xc00071a370) (0xc0007eee60) Stream removed, broadcasting: 3\nI0906 20:31:37.554270 443 log.go:172] (0xc00071a370) (0xc00068e000) Stream removed, broadcasting: 5\n" Sep 6 20:31:37.557: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 6 20:31:37.557: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 6 20:31:37.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:31:39.703: INFO: rc: 1 Sep 6 20:31:39.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000b2f260 exit status 1 true [0xc0004c13d0 0xc0004c1400 0xc0004c1438] [0xc0004c13d0 0xc0004c1400 0xc0004c1438] [0xc0004c13f0 0xc0004c1420] [0x935700 0x935700] 0xc0018c0ba0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Sep 6 20:31:49.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:31:49.901: INFO: stderr: "I0906 20:31:49.816980 488 log.go:172] (0xc0006e40b0) (0xc0007025a0) Create stream\nI0906 20:31:49.817023 488 log.go:172] (0xc0006e40b0) (0xc0007025a0) Stream added, broadcasting: 1\nI0906 20:31:49.818818 488 log.go:172] (0xc0006e40b0) Reply frame received for 1\nI0906 20:31:49.818853 488 log.go:172] (0xc0006e40b0) (0xc0004f8b40) Create stream\nI0906 
20:31:49.818865 488 log.go:172] (0xc0006e40b0) (0xc0004f8b40) Stream added, broadcasting: 3\nI0906 20:31:49.819594 488 log.go:172] (0xc0006e40b0) Reply frame received for 3\nI0906 20:31:49.819626 488 log.go:172] (0xc0006e40b0) (0xc0004f8c80) Create stream\nI0906 20:31:49.819637 488 log.go:172] (0xc0006e40b0) (0xc0004f8c80) Stream added, broadcasting: 5\nI0906 20:31:49.820602 488 log.go:172] (0xc0006e40b0) Reply frame received for 5\nI0906 20:31:49.896183 488 log.go:172] (0xc0006e40b0) Data frame received for 3\nI0906 20:31:49.896225 488 log.go:172] (0xc0004f8b40) (3) Data frame handling\nI0906 20:31:49.896256 488 log.go:172] (0xc0004f8b40) (3) Data frame sent\nI0906 20:31:49.896279 488 log.go:172] (0xc0006e40b0) Data frame received for 3\nI0906 20:31:49.896295 488 log.go:172] (0xc0004f8b40) (3) Data frame handling\nI0906 20:31:49.896724 488 log.go:172] (0xc0006e40b0) Data frame received for 5\nI0906 20:31:49.896740 488 log.go:172] (0xc0004f8c80) (5) Data frame handling\nI0906 20:31:49.896752 488 log.go:172] (0xc0004f8c80) (5) Data frame sent\nI0906 20:31:49.896758 488 log.go:172] (0xc0006e40b0) Data frame received for 5\nI0906 20:31:49.896763 488 log.go:172] (0xc0004f8c80) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0906 20:31:49.898258 488 log.go:172] (0xc0006e40b0) Data frame received for 1\nI0906 20:31:49.898272 488 log.go:172] (0xc0007025a0) (1) Data frame handling\nI0906 20:31:49.898283 488 log.go:172] (0xc0007025a0) (1) Data frame sent\nI0906 20:31:49.898292 488 log.go:172] (0xc0006e40b0) (0xc0007025a0) Stream removed, broadcasting: 1\nI0906 20:31:49.898421 488 log.go:172] (0xc0006e40b0) (0xc0007025a0) Stream removed, broadcasting: 1\nI0906 20:31:49.898437 488 log.go:172] (0xc0006e40b0) (0xc0004f8b40) Stream removed, broadcasting: 3\nI0906 20:31:49.898484 488 log.go:172] (0xc0006e40b0) Go away received\nI0906 20:31:49.898531 488 log.go:172] (0xc0006e40b0) (0xc0004f8c80) Stream removed, broadcasting: 5\n" Sep 6 20:31:49.901: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 6 20:31:49.901: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 6 20:31:49.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:31:50.098: INFO: stderr: "I0906 20:31:50.018054 511 log.go:172] (0xc0008542c0) (0xc0006fc640) Create stream\nI0906 20:31:50.018102 511 log.go:172] (0xc0008542c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0906 20:31:50.021468 511 log.go:172] (0xc0008542c0) Reply frame received for 1\nI0906 20:31:50.021597 511 log.go:172] (0xc0008542c0) (0xc0006fc6e0) Create stream\nI0906 20:31:50.021676 511 log.go:172] (0xc0008542c0) (0xc0006fc6e0) Stream added, broadcasting: 3\nI0906 20:31:50.022929 511 log.go:172] (0xc0008542c0) Reply frame received for 3\nI0906 20:31:50.022986 511 log.go:172] (0xc0008542c0) (0xc0005e8c80) Create stream\nI0906 20:31:50.023012 511 log.go:172] (0xc0008542c0) (0xc0005e8c80) Stream added, broadcasting: 5\nI0906 20:31:50.023730 511 log.go:172] (0xc0008542c0) Reply frame received for 5\nI0906 20:31:50.092894 511 log.go:172] (0xc0008542c0) Data frame received for 3\nI0906 20:31:50.092935 511 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0906 20:31:50.092963 511 log.go:172] (0xc0006fc6e0) (3) Data frame sent\nI0906 20:31:50.092979 511 log.go:172] (0xc0008542c0) 
Data frame received for 3\nI0906 20:31:50.092993 511 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0906 20:31:50.093128 511 log.go:172] (0xc0008542c0) Data frame received for 5\nI0906 20:31:50.093168 511 log.go:172] (0xc0005e8c80) (5) Data frame handling\nI0906 20:31:50.093196 511 log.go:172] (0xc0005e8c80) (5) Data frame sent\nI0906 20:31:50.093214 511 log.go:172] (0xc0008542c0) Data frame received for 5\nI0906 20:31:50.093231 511 log.go:172] (0xc0005e8c80) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0906 20:31:50.094750 511 log.go:172] (0xc0008542c0) Data frame received for 1\nI0906 20:31:50.094783 511 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0906 20:31:50.094817 511 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0906 20:31:50.094843 511 log.go:172] (0xc0008542c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0906 20:31:50.094976 511 log.go:172] (0xc0008542c0) Go away received\nI0906 20:31:50.095061 511 log.go:172] (0xc0008542c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0906 20:31:50.095082 511 log.go:172] (0xc0008542c0) (0xc0006fc6e0) Stream removed, broadcasting: 3\nI0906 20:31:50.095095 511 log.go:172] (0xc0008542c0) (0xc0005e8c80) Stream removed, broadcasting: 5\n" Sep 6 20:31:50.098: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 6 20:31:50.098: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 6 20:31:50.101: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 6 20:31:50.101: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 6 20:31:50.101: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 6 20:31:50.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:31:50.277: INFO: stderr: "I0906 20:31:50.229163 533 log.go:172] (0xc0001386e0) (0xc0007912c0) Create stream\nI0906 20:31:50.229209 533 log.go:172] (0xc0001386e0) (0xc0007912c0) Stream added, broadcasting: 1\nI0906 20:31:50.231289 533 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0906 20:31:50.231332 533 log.go:172] (0xc0001386e0) (0xc0008a2000) Create stream\nI0906 20:31:50.231342 533 log.go:172] (0xc0001386e0) (0xc0008a2000) Stream added, broadcasting: 3\nI0906 20:31:50.232226 533 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0906 20:31:50.232257 533 log.go:172] (0xc0001386e0) (0xc000791360) Create stream\nI0906 20:31:50.232268 533 log.go:172] (0xc0001386e0) (0xc000791360) Stream added, broadcasting: 5\nI0906 20:31:50.232934 533 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0906 20:31:50.272621 533 log.go:172] (0xc0001386e0) Data frame received for 3\nI0906 20:31:50.272655 533 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0906 20:31:50.272664 533 log.go:172] (0xc0008a2000) (3) Data frame sent\nI0906 20:31:50.272676 533 log.go:172] (0xc0001386e0) Data frame received for 3\nI0906 20:31:50.272689 533 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0906 20:31:50.272710 533 log.go:172] (0xc0001386e0) Data frame received for 5\nI0906 20:31:50.272717 533 log.go:172] (0xc000791360) (5) Data frame handling\nI0906 20:31:50.273927 533 log.go:172] (0xc0001386e0) Data frame received 
for 1\nI0906 20:31:50.273953 533 log.go:172] (0xc0007912c0) (1) Data frame handling\nI0906 20:31:50.273963 533 log.go:172] (0xc0007912c0) (1) Data frame sent\nI0906 20:31:50.274047 533 log.go:172] (0xc0001386e0) (0xc0007912c0) Stream removed, broadcasting: 1\nI0906 20:31:50.274095 533 log.go:172] (0xc0001386e0) Go away received\nI0906 20:31:50.274396 533 log.go:172] (0xc0001386e0) (0xc0007912c0) Stream removed, broadcasting: 1\nI0906 20:31:50.274422 533 log.go:172] (0xc0001386e0) (0xc0008a2000) Stream removed, broadcasting: 3\nI0906 20:31:50.274434 533 log.go:172] (0xc0001386e0) (0xc000791360) Stream removed, broadcasting: 5\n" Sep 6 20:31:50.277: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:31:50.277: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 6 20:31:50.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:31:50.482: INFO: stderr: "I0906 20:31:50.387124 555 log.go:172] (0xc0005bc000) (0xc00072c000) Create stream\nI0906 20:31:50.387187 555 log.go:172] (0xc0005bc000) (0xc00072c000) Stream added, broadcasting: 1\nI0906 20:31:50.389494 555 log.go:172] (0xc0005bc000) Reply frame received for 1\nI0906 20:31:50.389525 555 log.go:172] (0xc0005bc000) (0xc00072c0a0) Create stream\nI0906 20:31:50.389537 555 log.go:172] (0xc0005bc000) (0xc00072c0a0) Stream added, broadcasting: 3\nI0906 20:31:50.390276 555 log.go:172] (0xc0005bc000) Reply frame received for 3\nI0906 20:31:50.390304 555 log.go:172] (0xc0005bc000) (0xc00072c1e0) Create stream\nI0906 20:31:50.390315 555 log.go:172] (0xc0005bc000) (0xc00072c1e0) Stream added, broadcasting: 5\nI0906 20:31:50.391010 555 log.go:172] (0xc0005bc000) Reply frame received for 5\nI0906 20:31:50.477061 555 log.go:172] (0xc0005bc000) Data frame received for 3\nI0906 20:31:50.477106 555 log.go:172] (0xc00072c0a0) (3) Data frame handling\nI0906 20:31:50.477140 555 log.go:172] (0xc00072c0a0) (3) Data frame sent\nI0906 20:31:50.477165 555 log.go:172] (0xc0005bc000) Data frame received for 3\nI0906 20:31:50.477181 555 log.go:172] (0xc00072c0a0) (3) Data frame handling\nI0906 20:31:50.477318 555 log.go:172] (0xc0005bc000) Data frame received for 5\nI0906 20:31:50.477338 555 log.go:172] (0xc00072c1e0) (5) Data frame handling\nI0906 20:31:50.478761 555 log.go:172] (0xc0005bc000) Data frame received for 1\nI0906 20:31:50.478773 555 log.go:172] (0xc00072c000) (1) Data frame handling\nI0906 20:31:50.478780 555 log.go:172] (0xc00072c000) (1) Data frame sent\nI0906 20:31:50.478788 555 log.go:172] (0xc0005bc000) (0xc00072c000) Stream removed, broadcasting: 1\nI0906 20:31:50.478890 555 log.go:172] (0xc0005bc000) (0xc00072c000) Stream removed, broadcasting: 1\nI0906 20:31:50.478901 555 log.go:172] (0xc0005bc000) (0xc00072c0a0) Stream removed, broadcasting: 3\nI0906 20:31:50.478995 555 log.go:172] (0xc0005bc000) (0xc00072c1e0) Stream removed, broadcasting: 5\nI0906 20:31:50.479015 555 log.go:172] (0xc0005bc000) Go away received\n" Sep 6 20:31:50.482: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:31:50.482: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 6 20:31:50.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-jt6fh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:31:50.694: INFO: stderr: "I0906 20:31:50.600729 577 log.go:172] (0xc000138790) (0xc0005d9400) Create stream\nI0906 20:31:50.600786 577 log.go:172] (0xc000138790) (0xc0005d9400) Stream added, broadcasting: 1\nI0906 20:31:50.602713 577 log.go:172] (0xc000138790) Reply frame received for 1\nI0906 20:31:50.602781 577 log.go:172] (0xc000138790) (0xc000726000) Create stream\nI0906 20:31:50.602809 577 log.go:172] (0xc000138790) (0xc000726000) Stream added, broadcasting: 3\nI0906 20:31:50.603572 577 log.go:172] (0xc000138790) Reply frame received for 3\nI0906 20:31:50.603627 577 log.go:172] (0xc000138790) (0xc0004e4000) Create stream\nI0906 20:31:50.603642 577 log.go:172] (0xc000138790) (0xc0004e4000) Stream added, broadcasting: 5\nI0906 20:31:50.604341 577 log.go:172] (0xc000138790) Reply frame received for 5\nI0906 20:31:50.690022 577 log.go:172] (0xc000138790) Data frame received for 5\nI0906 20:31:50.690127 577 log.go:172] (0xc0004e4000) (5) Data frame handling\nI0906 20:31:50.690178 577 log.go:172] (0xc000138790) Data frame received for 3\nI0906 20:31:50.690206 577 log.go:172] (0xc000726000) (3) Data frame handling\nI0906 20:31:50.690244 577 log.go:172] (0xc000726000) (3) Data frame sent\nI0906 20:31:50.690272 577 log.go:172] (0xc000138790) Data frame received for 3\nI0906 20:31:50.690291 577 log.go:172] (0xc000726000) (3) Data frame handling\nI0906 20:31:50.691489 577 log.go:172] (0xc000138790) Data frame received for 1\nI0906 20:31:50.691526 577 log.go:172] (0xc0005d9400) (1) Data frame handling\nI0906 20:31:50.691562 577 log.go:172] (0xc0005d9400) (1) Data frame sent\nI0906 20:31:50.691589 577 log.go:172] (0xc000138790) (0xc0005d9400) Stream removed, broadcasting: 1\nI0906 20:31:50.691620 577 log.go:172] (0xc000138790) Go away received\nI0906 20:31:50.691837 577 log.go:172] (0xc000138790) (0xc0005d9400) Stream removed, broadcasting: 1\nI0906 20:31:50.691860 577 log.go:172] (0xc000138790) (0xc000726000) Stream removed, broadcasting: 3\nI0906 20:31:50.691878 577 log.go:172] (0xc000138790) (0xc0004e4000) Stream removed, broadcasting: 5\n" Sep 6 20:31:50.695: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:31:50.695: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 6 20:31:50.695: INFO: Waiting for statefulset status.replicas updated to 0 Sep 6 20:31:50.697: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 6 20:32:00.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 6 20:32:00.703: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 6 20:32:00.703: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 6 20:32:00.713: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:00.713: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:00.713: INFO: ss-1 hunter-worker Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:00.713: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:00.713: INFO: Sep 6 20:32:00.713: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:02.015: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:02.015: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:02.015: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:02.015: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:02.015: INFO: Sep 6 20:32:02.015: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:03.052: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:03.052: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:03.052: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:03.052: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:03.052: INFO: Sep 6 20:32:03.052: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:04.143: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:04.143: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:04.143: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:04.143: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:04.143: INFO: Sep 6 20:32:04.143: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:05.147: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:05.147: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:05.148: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:05.148: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:05.148: INFO: Sep 6 20:32:05.148: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:06.152: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:06.152: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:06.152: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:06.152: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:06.152: INFO: Sep 6 20:32:06.152: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:07.156: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:07.156: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:07.156: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:07.156: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:07.156: INFO: Sep 6 20:32:07.156: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:08.679: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:08.679: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:08.679: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:08.679: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:08.679: INFO: Sep 6 20:32:08.679: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 6 20:32:10.336: INFO: POD NODE PHASE GRACE CONDITIONS Sep 6 20:32:10.336: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:06 +0000 UTC }] Sep 6 20:32:10.336: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:10.336: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:31:26 +0000 UTC }] Sep 6 20:32:10.336: INFO: Sep 6 20:32:10.336: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down 
stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-jt6fh Sep 6 20:32:11.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:32:20.580: INFO: rc: 1 Sep 6 20:32:20.580: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0906 20:32:11.452584 599 log.go:172] (0xc000166840) (0xc0007546e0) Create stream I0906 20:32:11.452649 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream added, broadcasting: 1 I0906 20:32:11.454593 599 log.go:172] (0xc000166840) Reply frame received for 1 I0906 20:32:11.454631 599 log.go:172] (0xc000166840) (0xc0005f8d20) Create stream I0906 20:32:11.454642 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream added, broadcasting: 3 I0906 20:32:11.455358 599 log.go:172] (0xc000166840) Reply frame received for 3 I0906 20:32:11.455383 599 log.go:172] (0xc000166840) (0xc000754780) Create stream I0906 20:32:11.455390 599 log.go:172] (0xc000166840) (0xc000754780) Stream added, broadcasting: 5 I0906 20:32:11.456492 599 log.go:172] (0xc000166840) Reply frame received for 5 I0906 20:32:20.576441 599 log.go:172] (0xc000166840) Data frame received for 1 I0906 20:32:20.576469 599 log.go:172] (0xc0007546e0) (1) Data frame handling I0906 20:32:20.576480 599 log.go:172] (0xc0007546e0) (1) Data frame sent I0906 20:32:20.576490 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream removed, broadcasting: 1 I0906 20:32:20.576733 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream removed, broadcasting: 3 I0906 20:32:20.576755 599 log.go:172] (0xc000166840) (0xc000754780) Stream removed, broadcasting: 5 I0906 20:32:20.576770 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream removed, broadcasting: 1 I0906 20:32:20.576778 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream removed, broadcasting: 3 I0906 20:32:20.576787 599 log.go:172] (0xc000166840) (0xc000754780) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "c09767b1016aad037250d1ea18957ee275200cc1d99e462271ac8ec8a1e38d15": cannot exec in a deleted state: unknown [] 0xc001f847e0 exit status 1 true [0xc000a16560 0xc000a16798 0xc000a168a8] [0xc000a16560 0xc000a16798 0xc000a168a8] [0xc000a16780 0xc000a167f8] [0x935700 0x935700] 0xc0018c0480 }: Command stdout: stderr: I0906 20:32:11.452584 599 log.go:172] (0xc000166840) (0xc0007546e0) Create stream I0906 20:32:11.452649 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream added, broadcasting: 1 I0906 20:32:11.454593 599 log.go:172] (0xc000166840) Reply frame received for 1 I0906 20:32:11.454631 599 log.go:172] (0xc000166840) (0xc0005f8d20) Create stream I0906 20:32:11.454642 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream added, broadcasting: 3 I0906 20:32:11.455358 599 log.go:172] (0xc000166840) Reply frame received for 3 I0906 20:32:11.455383 599 log.go:172] (0xc000166840) (0xc000754780) Create stream I0906 20:32:11.455390 599 log.go:172] (0xc000166840) (0xc000754780) Stream added, broadcasting: 5 I0906 20:32:11.456492 599 log.go:172] (0xc000166840) Reply frame received for 5 I0906 20:32:20.576441 599 log.go:172] (0xc000166840) Data frame received for 1 I0906 20:32:20.576469 599 
log.go:172] (0xc0007546e0) (1) Data frame handling I0906 20:32:20.576480 599 log.go:172] (0xc0007546e0) (1) Data frame sent I0906 20:32:20.576490 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream removed, broadcasting: 1 I0906 20:32:20.576733 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream removed, broadcasting: 3 I0906 20:32:20.576755 599 log.go:172] (0xc000166840) (0xc000754780) Stream removed, broadcasting: 5 I0906 20:32:20.576770 599 log.go:172] (0xc000166840) (0xc0007546e0) Stream removed, broadcasting: 1 I0906 20:32:20.576778 599 log.go:172] (0xc000166840) (0xc0005f8d20) Stream removed, broadcasting: 3 I0906 20:32:20.576787 599 log.go:172] (0xc000166840) (0xc000754780) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "c09767b1016aad037250d1ea18957ee275200cc1d99e462271ac8ec8a1e38d15": cannot exec in a deleted state: unknown error: exit status 1 Sep 6 20:32:30.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:32:30.721: INFO: rc: 1 Sep 6 20:32:30.721: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84930 exit status 1 true [0xc000a16940 0xc000a169a8 0xc000a16a08] [0xc000a16940 0xc000a169a8 0xc000a16a08] [0xc000a169a0 0xc000a169e8] [0x935700 0x935700] 0xc0018c0840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:32:40.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:32:40.813: INFO: rc: 1 Sep 6 20:32:40.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84a50 exit status 1 true [0xc000a16a10 0xc000a16af0 0xc000a16bd0] [0xc000a16a10 0xc000a16af0 0xc000a16bd0] [0xc000a16a48 0xc000a16bb8] [0x935700 0x935700] 0xc0018c0c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:32:50.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:32:50.912: INFO: rc: 1 Sep 6 20:32:50.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba1b0 exit status 1 true [0xc00000e228 0xc000f8c008 0xc000f8c020] [0xc00000e228 0xc000f8c008 0xc000f8c020] [0xc000f8c000 0xc000f8c018] [0x935700 0x935700] 0xc001b281e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:00.912: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:00.998: INFO: rc: 1 Sep 6 20:33:00.998: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d58120 exit status 1 true [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c0110 0xc0004c0150] [0x935700 0x935700] 0xc0010caea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:10.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:11.074: INFO: rc: 1 Sep 6 20:33:11.074: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f48150 exit status 1 true [0xc000dc6000 0xc000dc6018 0xc000dc6030] [0xc000dc6000 0xc000dc6018 0xc000dc6030] [0xc000dc6010 0xc000dc6028] [0x935700 0x935700] 0xc0009b4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:21.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:21.152: INFO: rc: 1 Sep 6 20:33:21.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84c00 exit status 1 true [0xc000a16be8 0xc000a16c10 0xc000a16d18] [0xc000a16be8 0xc000a16c10 0xc000a16d18] [0xc000a16c08 0xc000a16d08] [0x935700 0x935700] 0xc0018c0f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:31.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:31.233: INFO: rc: 1 Sep 6 20:33:31.233: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f482a0 exit status 1 true [0xc000dc6038 0xc000dc6050 0xc000dc6068] [0xc000dc6038 0xc000dc6050 0xc000dc6068] [0xc000dc6048 0xc000dc6060] [0x935700 0x935700] 0xc0009b4c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:41.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:41.314: INFO: rc: 1 Sep 6 20:33:41.314: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba2d0 exit status 1 true [0xc000f8c028 0xc000f8c040 0xc000f8c058] [0xc000f8c028 0xc000f8c040 0xc000f8c058] [0xc000f8c038 0xc000f8c050] [0x935700 0x935700] 0xc001b28480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:33:51.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:33:51.394: INFO: rc: 1 Sep 6 20:33:51.394: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84d20 exit status 1 true [0xc000a16da8 0xc000a16f60 0xc000a16fe0] [0xc000a16da8 0xc000a16f60 0xc000a16fe0] [0xc000a16f50 0xc000a16fb8] [0x935700 0x935700] 0xc0018c1380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:01.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:01.472: INFO: rc: 1 Sep 6 20:34:01.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba480 exit status 1 true [0xc000f8c068 0xc000f8c0a0 0xc000f8c0f8] [0xc000f8c068 0xc000f8c0a0 0xc000f8c0f8] [0xc000f8c088 0xc000f8c0e0] [0x935700 0x935700] 0xc001b287e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:11.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:11.564: INFO: rc: 1 Sep 6 20:34:11.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84e40 exit status 1 true [0xc000a17060 0xc000a17128 0xc000a17190] [0xc000a17060 0xc000a17128 0xc000a17190] [0xc000a170e8 0xc000a17180] [0x935700 0x935700] 0xc0009b0000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:21.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:21.650: INFO: rc: 1 Sep 6 20:34:21.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f48180 exit status 1 true [0xc00016e000 0xc000dc6008 0xc000dc6020] [0xc00016e000 0xc000dc6008 0xc000dc6020] [0xc000dc6000 0xc000dc6018] [0x935700 0x935700] 0xc0018c03c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:31.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:31.733: INFO: rc: 1 Sep 6 20:34:31.733: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84180 exit status 1 true [0xc000f8c000 0xc000f8c018 0xc000f8c030] [0xc000f8c000 0xc000f8c018 0xc000f8c030] [0xc000f8c010 0xc000f8c028] [0x935700 0x935700] 0xc0009b4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:41.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:41.814: INFO: rc: 1 Sep 6 20:34:41.814: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84720 exit status 1 true [0xc000f8c038 0xc000f8c050 0xc000f8c080] [0xc000f8c038 0xc000f8c050 0xc000f8c080] [0xc000f8c048 0xc000f8c068] [0x935700 0x935700] 0xc0009b4c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:34:51.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:34:51.901: INFO: rc: 1 Sep 6 20:34:51.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f84870 exit status 1 true [0xc000f8c088 0xc000f8c0e0 0xc000f8c108] [0xc000f8c088 0xc000f8c0e0 0xc000f8c108] [0xc000f8c0c0 0xc000f8c100] [0x935700 0x935700] 0xc0009b4fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:01.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:01.980: INFO: rc: 1 Sep 6 20:35:01.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d58180 exit status 1 true [0xc000a16000 0xc000a163a8 0xc000a16710] [0xc000a16000 0xc000a163a8 
0xc000a16710] [0xc000a16238 0xc000a16560] [0x935700 0x935700] 0xc001b281e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:11.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:12.062: INFO: rc: 1 Sep 6 20:35:12.062: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba240 exit status 1 true [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c0110 0xc0004c0150] [0x935700 0x935700] 0xc0009b06c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:22.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:22.138: INFO: rc: 1 Sep 6 20:35:22.138: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f849c0 exit status 1 true [0xc000f8c110 0xc000f8c128 0xc000f8c148] [0xc000f8c110 0xc000f8c128 0xc000f8c148] [0xc000f8c120 0xc000f8c138] [0x935700 0x935700] 0xc0010caea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:32.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:32.215: INFO: rc: 1 Sep 6 20:35:32.215: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d58300 exit status 1 true [0xc000a16780 0xc000a167f8 0xc000a16988] [0xc000a16780 0xc000a167f8 0xc000a16988] [0xc000a167b8 0xc000a16940] [0x935700 0x935700] 0xc001b28480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:42.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:42.302: INFO: rc: 1 Sep 6 20:35:42.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d584b0 exit status 1 true [0xc000a169a0 0xc000a169e8 0xc000a16a30] [0xc000a169a0 0xc000a169e8 0xc000a16a30] [0xc000a169c0 0xc000a16a10] [0x935700 0x935700] 0xc001b287e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:35:52.302: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:35:52.387: INFO: rc: 1 Sep 6 20:35:52.387: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d58600 exit status 1 true [0xc000a16a48 0xc000a16bb8 0xc000a16bf8] [0xc000a16a48 0xc000a16bb8 0xc000a16bf8] [0xc000a16bb0 0xc000a16be8] [0x935700 0x935700] 0xc001b29f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:02.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:02.463: INFO: rc: 1 Sep 6 20:36:02.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f48360 exit status 1 true [0xc000dc6028 0xc000dc6040 0xc000dc6058] [0xc000dc6028 0xc000dc6040 0xc000dc6058] [0xc000dc6038 0xc000dc6050] [0x935700 0x935700] 0xc0018c0720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:12.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:12.537: INFO: rc: 1 Sep 6 20:36:12.537: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d58120 exit status 1 true [0xc00016e000 0xc000a16238 0xc000a16560] [0xc00016e000 0xc000a16238 0xc000a16560] [0xc000a16178 0xc000a163e0] [0x935700 0x935700] 0xc001b281e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:22.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:22.622: INFO: rc: 1 Sep 6 20:36:22.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba1b0 exit status 1 true [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c00c8 0xc0004c0130 0xc0004c0160] [0xc0004c0110 0xc0004c0150] [0x935700 0x935700] 0xc0009b4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:32.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:32.701: 
INFO: rc: 1 Sep 6 20:36:32.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f48150 exit status 1 true [0xc000dc6000 0xc000dc6018 0xc000dc6030] [0xc000dc6000 0xc000dc6018 0xc000dc6030] [0xc000dc6010 0xc000dc6028] [0x935700 0x935700] 0xc0009b03c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:42.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:44.068: INFO: rc: 1 Sep 6 20:36:44.068: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba360 exit status 1 true [0xc0004c0180 0xc0004c01f0 0xc0004c0258] [0xc0004c0180 0xc0004c01f0 0xc0004c0258] [0xc0004c01d0 0xc0004c0248] [0x935700 0x935700] 0xc0009b4c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:36:54.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:36:54.162: INFO: rc: 1 Sep 6 20:36:54.162: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f841b0 exit status 1 true [0xc000f8c000 0xc000f8c018 0xc000f8c030] [0xc000f8c000 0xc000f8c018 0xc000f8c030] [0xc000f8c010 0xc000f8c028] [0x935700 0x935700] 0xc0018c03c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:37:04.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:37:04.250: INFO: rc: 1 Sep 6 20:37:04.250: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017ba510 exit status 1 true [0xc0004c0270 0xc0004c0330 0xc0004c03c0] [0xc0004c0270 0xc0004c0330 0xc0004c03c0] [0xc0004c0308 0xc0004c03b8] [0x935700 0x935700] 0xc0009b4fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 6 20:37:14.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jt6fh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:37:14.329: INFO: rc: 1 Sep 6 20:37:14.330: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Sep 6 20:37:14.330: INFO: Scaling statefulset ss to 0 Sep 6 20:37:14.335: INFO: Waiting for 
statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 6 20:37:14.337: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jt6fh Sep 6 20:37:14.339: INFO: Scaling statefulset ss to 0 Sep 6 20:37:14.346: INFO: Waiting for statefulset status.replicas updated to 0 Sep 6 20:37:14.348: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:37:14.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jt6fh" for this suite. Sep 6 20:37:20.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:37:20.423: INFO: namespace: e2e-tests-statefulset-jt6fh, resource: bindings, ignored listing per whitelist Sep 6 20:37:20.461: INFO: namespace e2e-tests-statefulset-jt6fh deletion completed in 6.093754956s • [SLOW TEST:374.536 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:37:20.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 6 20:37:20.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-mmh6s" to be "success or failure" Sep 6 20:37:20.961: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.091153ms Sep 6 20:37:22.964: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02942502s Sep 6 20:37:24.967: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031885843s Sep 6 20:37:27.462: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.52704735s Sep 6 20:37:29.553: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618123922s Sep 6 20:37:31.556: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.621253888s Sep 6 20:37:33.560: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.624831214s Sep 6 20:37:35.563: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.628264272s Sep 6 20:37:37.566: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.630843276s Sep 6 20:37:39.568: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.633066523s Sep 6 20:37:41.571: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.636313873s Sep 6 20:37:43.574: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.638984871s Sep 6 20:37:45.577: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.642213215s Sep 6 20:37:47.580: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.645231495s Sep 6 20:37:50.052: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.116913348s Sep 6 20:37:52.055: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 31.120093428s Sep 6 20:37:54.202: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.267229117s Sep 6 20:37:56.206: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 35.271046289s Sep 6 20:37:58.850: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 37.914693439s Sep 6 20:38:00.854: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 39.918586251s Sep 6 20:38:02.857: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 41.921584485s Sep 6 20:38:05.483: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 44.54785404s Sep 6 20:38:07.486: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 46.551155996s Sep 6 20:38:09.994: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 49.059116649s Sep 6 20:38:11.999: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 51.064554639s Sep 6 20:38:14.003: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.067814928s Sep 6 20:38:16.005: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 55.070418695s Sep 6 20:38:18.009: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 57.073736609s Sep 6 20:38:20.011: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 59.075751817s Sep 6 20:38:22.014: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.078952738s Sep 6 20:38:24.017: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.081906211s Sep 6 20:38:26.020: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m5.085182499s STEP: Saw pod success Sep 6 20:38:26.020: INFO: Pod "downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:38:26.022: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008 container client-container: STEP: delete the pod Sep 6 20:38:26.065: INFO: Waiting for pod downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008 to disappear Sep 6 20:38:26.100: INFO: Pod downwardapi-volume-c3233f72-f080-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:38:26.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mmh6s" for this suite. 
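The RunHostCmd retries earlier in this output re-run the same exec against pod ss-0 every 10 seconds; each attempt fails with NotFound because ss-0 does not exist at that point in the burst-scaling test, and the suite eventually stops retrying and proceeds to scale the StatefulSet to 0. The command can be reproduced by hand; a minimal sketch using the exact invocation from the log, assuming the same kubeconfig and a namespace and pod that still exist (e2e-tests-statefulset-jt6fh is ephemeral and is destroyed at the end of the test):

# Hypothetical stand-ins; substitute a live namespace and StatefulSet pod.
KUBECONFIG=/root/.kube/config
NS=e2e-tests-statefulset-jt6fh
until kubectl --kubeconfig="$KUBECONFIG" exec --namespace="$NS" ss-0 -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
  sleep 10   # the framework waits 10s between attempts, as seen in the log above
done

Note that the trailing '|| true' runs inside the pod's shell, so the loop exits as soon as kubectl can reach the pod at all; the result of the mv itself is deliberately ignored.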
Sep 6 20:38:32.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:38:32.173: INFO: namespace: e2e-tests-projected-mmh6s, resource: bindings, ignored listing per whitelist Sep 6 20:38:32.181: INFO: namespace e2e-tests-projected-mmh6s deletion completed in 6.078821082s • [SLOW TEST:71.720 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:38:32.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:38:32.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Sep 6 20:38:32.437: INFO: stderr: "" Sep 6 20:38:32.437: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-06T19:06:33Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:38:32.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tlc4v" for this suite. 
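The kubectl version check above only needs both the client and server stanzas to appear in the command's stdout. The same flags from the log can be reused directly; a minimal sketch (the grep is an illustrative stand-in for the test's assertion, not what the suite actually runs):

kubectl --kubeconfig=/root/.kube/config version
# Roughly what the test verifies: both stanzas are present in the output.
kubectl --kubeconfig=/root/.kube/config version | grep -E 'Client Version|Server Version'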
Sep 6 20:38:38.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:38:38.507: INFO: namespace: e2e-tests-kubectl-tlc4v, resource: bindings, ignored listing per whitelist Sep 6 20:38:38.527: INFO: namespace e2e-tests-kubectl-tlc4v deletion completed in 6.086413052s • [SLOW TEST:6.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:38:38.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:38:38.617: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 6 20:38:43.627: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 6 20:38:45.632: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 6 20:38:47.636: INFO: Creating deployment "test-rollover-deployment" Sep 6 20:38:47.698: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 6 20:38:49.703: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 6 20:38:49.708: INFO: Ensure that both replica sets have 1 created replica Sep 6 20:38:49.712: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 6 20:38:49.717: INFO: Updating deployment test-rollover-deployment Sep 6 20:38:49.717: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 6 20:38:51.745: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 6 20:38:51.751: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 6 20:38:51.754: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:38:51.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021530, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:38:53.762: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:38:53.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021530, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:38:55.763: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:38:55.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021530, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:38:57.759: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:38:57.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021537, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:38:59.762: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:38:59.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021537, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:39:01.760: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:39:01.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021537, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:39:03.761: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:39:03.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021537, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:39:05.762: INFO: all replica sets need to contain the pod-template-hash label Sep 6 20:39:05.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021537, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735021527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 6 20:39:07.760: INFO: Sep 6 20:39:07.760: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 6 20:39:07.768: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-pqmsq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pqmsq/deployments/test-rollover-deployment,UID:f6dcecf3-f080-11ea-b060-0242ac120006,ResourceVersion:213587,Generation:2,CreationTimestamp:2020-09-06 20:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-06 20:38:47 +0000 UTC 2020-09-06 20:38:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-06 20:39:07 
+0000 UTC 2020-09-06 20:38:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Sep 6 20:39:07.771: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-pqmsq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pqmsq/replicasets/test-rollover-deployment-5b8479fdb6,UID:f81a5fb8-f080-11ea-b060-0242ac120006,ResourceVersion:213578,Generation:2,CreationTimestamp:2020-09-06 20:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f6dcecf3-f080-11ea-b060-0242ac120006 0xc0020078a7 0xc0020078a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Sep 6 20:39:07.771: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 6 20:39:07.771: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-pqmsq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pqmsq/replicasets/test-rollover-controller,UID:f1763914-f080-11ea-b060-0242ac120006,ResourceVersion:213586,Generation:2,CreationTimestamp:2020-09-06 20:38:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f6dcecf3-f080-11ea-b060-0242ac120006 0xc002007717 0xc002007718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 6 20:39:07.771: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-pqmsq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pqmsq/replicasets/test-rollover-deployment-58494b7559,UID:f6e72f96-f080-11ea-b060-0242ac120006,ResourceVersion:213535,Generation:2,CreationTimestamp:2020-09-06 20:38:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f6dcecf3-f080-11ea-b060-0242ac120006 0xc0020077d7 0xc0020077d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 
58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 6 20:39:07.774: INFO: Pod "test-rollover-deployment-5b8479fdb6-fgrbn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-fgrbn,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-pqmsq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pqmsq/pods/test-rollover-deployment-5b8479fdb6-fgrbn,UID:f82d128d-f080-11ea-b060-0242ac120006,ResourceVersion:213556,Generation:0,CreationTimestamp:2020-09-06 20:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 f81a5fb8-f080-11ea-b060-0242ac120006 0xc001c75eb7 0xc001c75eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8dx2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8dx2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8dx2k true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c75f30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001c75f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:38:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:38:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:38:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:38:49 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.54,StartTime:2020-09-06 20:38:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-06 20:38:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://752c547a33fe10b1ffb8992aafeea4bc7d957e4dd91a8cc77dae68abdf41ed31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:39:07.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pqmsq" for this suite. Sep 6 20:39:21.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:39:21.965: INFO: namespace: e2e-tests-deployment-pqmsq, resource: bindings, ignored listing per whitelist Sep 6 20:39:21.981: INFO: namespace e2e-tests-deployment-pqmsq deletion completed in 14.204061418s • [SLOW TEST:43.453 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:39:21.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Sep 6 20:40:00.013: INFO: Pod pod-hostip-0c435c07-f081-11ea-b72c-0242ac110008 has hostIP: 172.18.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:40:00.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pbf28" for this suite. 
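The host-IP test above waits for status.hostIP to be populated on the pod it created and logs the value (172.18.0.5 on this run). The same field can be read with a JSONPath query; a minimal sketch using the pod name from the log (the namespace is ephemeral and is deleted a few lines further down):

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-pods-pbf28 \
  get pod pod-hostip-0c435c07-f081-11ea-b72c-0242ac110008 \
  -o jsonpath='{.status.hostIP}'
# Prints the address of the node the pod was scheduled to.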
Sep 6 20:40:22.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:40:22.039: INFO: namespace: e2e-tests-pods-pbf28, resource: bindings, ignored listing per whitelist Sep 6 20:40:22.081: INFO: namespace e2e-tests-pods-pbf28 deletion completed in 22.06492451s • [SLOW TEST:60.100 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:40:22.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Sep 6 20:40:22.187: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Sep 6 20:40:22.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:24.850: INFO: stderr: "" Sep 6 20:40:24.850: INFO: stdout: "service/redis-slave created\n" Sep 6 20:40:24.850: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Sep 6 20:40:24.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:25.150: INFO: stderr: "" Sep 6 20:40:25.150: INFO: stdout: "service/redis-master created\n" Sep 6 20:40:25.150: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 6 20:40:25.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:25.520: INFO: stderr: "" Sep 6 20:40:25.520: INFO: stdout: "service/frontend created\n" Sep 6 20:40:25.521: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Sep 6 20:40:25.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:25.813: INFO: stderr: "" Sep 6 20:40:25.813: INFO: stdout: "deployment.extensions/frontend created\n" Sep 6 20:40:25.813: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 6 20:40:25.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:26.084: INFO: stderr: "" Sep 6 20:40:26.084: INFO: stdout: "deployment.extensions/redis-master created\n" Sep 6 20:40:26.084: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Sep 6 20:40:26.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:40:26.342: INFO: stderr: "" Sep 6 20:40:26.342: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Sep 6 20:40:26.342: INFO: Waiting for all frontend pods to be Running. Sep 6 20:41:51.394: INFO: Waiting for frontend to serve content. Sep 6 20:41:51.407: INFO: Trying to add a new entry to the guestbook. Sep 6 20:41:51.418: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 6 20:41:51.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:51.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:51.618: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Sep 6 20:41:51.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:51.763: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:51.763: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Sep 6 20:41:51.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:51.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:51.892: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 6 20:41:51.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:51.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:51.981: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 6 20:41:51.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:52.121: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:52.121: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Sep 6 20:41:52.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-465t6' Sep 6 20:41:52.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 6 20:41:52.287: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:41:52.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-465t6" for this suite. 
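Each guestbook component above is created by piping its manifest to kubectl on stdin with 'create -f -', and torn down with a forced, zero-grace-period delete, which is what produces the repeated "Immediate deletion does not wait ..." warnings in stderr. A minimal sketch of that pattern, reusing the redis-slave Service manifest printed earlier in the test (the namespace is the ephemeral one from this run; substitute your own):

NS=e2e-tests-kubectl-465t6
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace="$NS"
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
EOF
# The suite deletes by piping the same manifest to 'delete -f -'; deleting by
# name has the same effect for this sketch.
kubectl --kubeconfig=/root/.kube/config delete service redis-slave \
  --grace-period=0 --force --namespace="$NS"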
Sep 6 20:44:04.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:44:04.395: INFO: namespace: e2e-tests-kubectl-465t6, resource: bindings, ignored listing per whitelist Sep 6 20:44:04.401: INFO: namespace e2e-tests-kubectl-465t6 deletion completed in 2m12.07311624s • [SLOW TEST:222.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:44:04.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Sep 6 20:44:47.034: INFO: Successfully updated pod "annotationupdateb3b9f661-f081-11ea-b72c-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:44:49.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qblz6" for this suite. 
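The annotation-update test above creates a pod whose projected downwardAPI volume exposes the pod's annotations, patches an annotation, and waits for the mounted file to reflect the change. The update half of that flow looks roughly like the sketch below; the pod name is taken from the log, but the annotation key/value and the mount path are illustrative, not the ones the suite uses, and the namespace is ephemeral:

NS=e2e-tests-projected-qblz6
POD=annotationupdateb3b9f661-f081-11ea-b72c-0242ac110008
# Flip an annotation on the running pod (hypothetical key/value).
kubectl --kubeconfig=/root/.kube/config -n "$NS" annotate pod "$POD" --overwrite builder=bar
# The kubelet re-syncs projected downwardAPI files periodically, so the new
# value appears in the mounted file after a short delay (path is illustrative).
kubectl --kubeconfig=/root/.kube/config -n "$NS" exec "$POD" -- cat /etc/podinfo/annotations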
Sep 6 20:45:53.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:45:53.094: INFO: namespace: e2e-tests-projected-qblz6, resource: bindings, ignored listing per whitelist Sep 6 20:45:53.131: INFO: namespace e2e-tests-projected-qblz6 deletion completed in 1m4.057934196s • [SLOW TEST:108.730 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:45:53.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Sep 6 20:45:53.225: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:46:14.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-b6ppd" for this suite. 
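The InitContainer test above creates a pod whose init containers must all run to completion, in order, before the app container starts; because the pod uses `restartPolicy: Never`, a failing init container marks the whole pod as failed rather than being retried indefinitely. A minimal sketch of such a pod spec in Go; names, images, and commands are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run one at a time, in declaration order, and each
			// must exit successfully before "app" is started.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sh", "-c", "echo ready"}},
			},
		},
	}
	fmt.Printf("init containers: %d, restartPolicy: %s\n",
		len(pod.Spec.InitContainers), pod.Spec.RestartPolicy)
}
```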
Sep 6 20:46:20.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:46:20.897: INFO: namespace: e2e-tests-init-container-b6ppd, resource: bindings, ignored listing per whitelist Sep 6 20:46:20.934: INFO: namespace e2e-tests-init-container-b6ppd deletion completed in 6.112780688s • [SLOW TEST:27.803 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:46:20.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hf7l7 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Sep 6 20:46:21.025: INFO: Found 0 stateful pods, waiting for 3 Sep 6 20:46:31.028: INFO: Found 1 stateful pods, waiting for 3 Sep 6 20:46:41.153: INFO: Found 2 stateful pods, waiting for 3 Sep 6 20:46:51.028: INFO: Found 2 stateful pods, waiting for 3 Sep 6 20:47:01.028: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 6 20:47:01.028: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 6 20:47:01.028: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 6 20:47:01.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hf7l7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:47:01.250: INFO: stderr: "I0906 20:47:01.146650 1529 log.go:172] (0xc000138790) (0xc000714640) Create stream\nI0906 20:47:01.146697 1529 log.go:172] (0xc000138790) (0xc000714640) Stream added, broadcasting: 1\nI0906 20:47:01.148558 1529 log.go:172] (0xc000138790) Reply frame received for 1\nI0906 20:47:01.148596 1529 log.go:172] (0xc000138790) (0xc000774c80) Create stream\nI0906 20:47:01.148609 1529 log.go:172] (0xc000138790) (0xc000774c80) Stream added, broadcasting: 3\nI0906 20:47:01.149429 1529 log.go:172] (0xc000138790) Reply frame received for 3\nI0906 20:47:01.149459 1529 log.go:172] (0xc000138790) (0xc0002ba000) Create stream\nI0906 20:47:01.149470 1529 log.go:172] (0xc000138790) (0xc0002ba000) Stream added, broadcasting: 5\nI0906 20:47:01.152673 1529 log.go:172] 
(0xc000138790) Reply frame received for 5\nI0906 20:47:01.245006 1529 log.go:172] (0xc000138790) Data frame received for 5\nI0906 20:47:01.245044 1529 log.go:172] (0xc0002ba000) (5) Data frame handling\nI0906 20:47:01.245066 1529 log.go:172] (0xc000138790) Data frame received for 3\nI0906 20:47:01.245077 1529 log.go:172] (0xc000774c80) (3) Data frame handling\nI0906 20:47:01.245088 1529 log.go:172] (0xc000774c80) (3) Data frame sent\nI0906 20:47:01.245099 1529 log.go:172] (0xc000138790) Data frame received for 3\nI0906 20:47:01.245109 1529 log.go:172] (0xc000774c80) (3) Data frame handling\nI0906 20:47:01.246637 1529 log.go:172] (0xc000138790) Data frame received for 1\nI0906 20:47:01.246652 1529 log.go:172] (0xc000714640) (1) Data frame handling\nI0906 20:47:01.246667 1529 log.go:172] (0xc000714640) (1) Data frame sent\nI0906 20:47:01.246677 1529 log.go:172] (0xc000138790) (0xc000714640) Stream removed, broadcasting: 1\nI0906 20:47:01.246794 1529 log.go:172] (0xc000138790) (0xc000714640) Stream removed, broadcasting: 1\nI0906 20:47:01.246808 1529 log.go:172] (0xc000138790) (0xc000774c80) Stream removed, broadcasting: 3\nI0906 20:47:01.246994 1529 log.go:172] (0xc000138790) Go away received\nI0906 20:47:01.247305 1529 log.go:172] (0xc000138790) (0xc0002ba000) Stream removed, broadcasting: 5\n" Sep 6 20:47:01.250: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:47:01.250: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Sep 6 20:47:11.277: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 6 20:47:21.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hf7l7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:47:21.477: INFO: stderr: "I0906 20:47:21.409839 1551 log.go:172] (0xc00016c840) (0xc000799360) Create stream\nI0906 20:47:21.409882 1551 log.go:172] (0xc00016c840) (0xc000799360) Stream added, broadcasting: 1\nI0906 20:47:21.411644 1551 log.go:172] (0xc00016c840) Reply frame received for 1\nI0906 20:47:21.411675 1551 log.go:172] (0xc00016c840) (0xc00059c000) Create stream\nI0906 20:47:21.411687 1551 log.go:172] (0xc00016c840) (0xc00059c000) Stream added, broadcasting: 3\nI0906 20:47:21.412606 1551 log.go:172] (0xc00016c840) Reply frame received for 3\nI0906 20:47:21.412637 1551 log.go:172] (0xc00016c840) (0xc0005fa000) Create stream\nI0906 20:47:21.412646 1551 log.go:172] (0xc00016c840) (0xc0005fa000) Stream added, broadcasting: 5\nI0906 20:47:21.413384 1551 log.go:172] (0xc00016c840) Reply frame received for 5\nI0906 20:47:21.473469 1551 log.go:172] (0xc00016c840) Data frame received for 3\nI0906 20:47:21.473493 1551 log.go:172] (0xc00059c000) (3) Data frame handling\nI0906 20:47:21.473506 1551 log.go:172] (0xc00059c000) (3) Data frame sent\nI0906 20:47:21.473515 1551 log.go:172] (0xc00016c840) Data frame received for 3\nI0906 20:47:21.473524 1551 log.go:172] (0xc00059c000) (3) Data frame handling\nI0906 20:47:21.473589 1551 log.go:172] (0xc00016c840) Data frame received for 5\nI0906 20:47:21.473598 1551 log.go:172] (0xc0005fa000) (5) Data frame handling\nI0906 20:47:21.474544 1551 log.go:172] (0xc00016c840) Data frame received for 1\nI0906 20:47:21.474630 1551 log.go:172] (0xc000799360) 
(1) Data frame handling\nI0906 20:47:21.474672 1551 log.go:172] (0xc000799360) (1) Data frame sent\nI0906 20:47:21.474709 1551 log.go:172] (0xc00016c840) (0xc000799360) Stream removed, broadcasting: 1\nI0906 20:47:21.474741 1551 log.go:172] (0xc00016c840) Go away received\nI0906 20:47:21.474858 1551 log.go:172] (0xc00016c840) (0xc000799360) Stream removed, broadcasting: 1\nI0906 20:47:21.474870 1551 log.go:172] (0xc00016c840) (0xc00059c000) Stream removed, broadcasting: 3\nI0906 20:47:21.474878 1551 log.go:172] (0xc00016c840) (0xc0005fa000) Stream removed, broadcasting: 5\n" Sep 6 20:47:21.477: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 6 20:47:21.477: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 6 20:47:31.495: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:47:31.495: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:31.495: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:31.495: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:41.501: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:47:41.501: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:41.501: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:41.501: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:51.499: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:47:51.499: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:47:51.499: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:01.502: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:01.502: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:01.502: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:11.512: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:11.512: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:11.512: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:21.639: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:21.639: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:21.639: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:31.501: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:31.501: INFO: Waiting for Pod 
e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:31.501: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:41.500: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:41.500: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:48:51.503: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:48:51.503: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:01.500: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:49:01.500: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:11.501: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:49:11.501: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:21.552: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:49:21.553: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:32.556: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:49:32.556: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:41.525: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:49:41.525: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 6 20:49:51.500: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:50:01.502: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update STEP: Rolling back to a previous revision Sep 6 20:50:11.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hf7l7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 6 20:50:11.754: INFO: stderr: "I0906 20:50:11.605278 1573 log.go:172] (0xc00015c840) (0xc000766640) Create stream\nI0906 20:50:11.605323 1573 log.go:172] (0xc00015c840) (0xc000766640) Stream added, broadcasting: 1\nI0906 20:50:11.607233 1573 log.go:172] (0xc00015c840) Reply frame received for 1\nI0906 20:50:11.607269 1573 log.go:172] (0xc00015c840) (0xc000614b40) Create stream\nI0906 20:50:11.607281 1573 log.go:172] (0xc00015c840) (0xc000614b40) Stream added, broadcasting: 3\nI0906 20:50:11.607838 1573 log.go:172] (0xc00015c840) Reply frame received for 3\nI0906 20:50:11.607862 1573 log.go:172] (0xc00015c840) (0xc0007666e0) Create stream\nI0906 20:50:11.607874 1573 log.go:172] (0xc00015c840) (0xc0007666e0) Stream added, broadcasting: 5\nI0906 20:50:11.608531 1573 log.go:172] (0xc00015c840) Reply frame received for 5\nI0906 20:50:11.746927 1573 log.go:172] (0xc00015c840) Data frame received for 3\nI0906 20:50:11.746953 1573 log.go:172] (0xc000614b40) (3) Data frame handling\nI0906 20:50:11.746970 1573 log.go:172] (0xc000614b40) (3) Data frame sent\nI0906 20:50:11.747203 1573 log.go:172] (0xc00015c840) Data frame received for 3\nI0906 20:50:11.747322 1573 
log.go:172] (0xc000614b40) (3) Data frame handling\nI0906 20:50:11.747361 1573 log.go:172] (0xc00015c840) Data frame received for 5\nI0906 20:50:11.747377 1573 log.go:172] (0xc0007666e0) (5) Data frame handling\nI0906 20:50:11.748868 1573 log.go:172] (0xc00015c840) Data frame received for 1\nI0906 20:50:11.748907 1573 log.go:172] (0xc000766640) (1) Data frame handling\nI0906 20:50:11.748939 1573 log.go:172] (0xc000766640) (1) Data frame sent\nI0906 20:50:11.748974 1573 log.go:172] (0xc00015c840) (0xc000766640) Stream removed, broadcasting: 1\nI0906 20:50:11.749012 1573 log.go:172] (0xc00015c840) Go away received\nI0906 20:50:11.749113 1573 log.go:172] (0xc00015c840) (0xc000766640) Stream removed, broadcasting: 1\nI0906 20:50:11.749127 1573 log.go:172] (0xc00015c840) (0xc000614b40) Stream removed, broadcasting: 3\nI0906 20:50:11.749135 1573 log.go:172] (0xc00015c840) (0xc0007666e0) Stream removed, broadcasting: 5\n" Sep 6 20:50:11.754: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 6 20:50:11.754: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 6 20:50:21.789: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 6 20:50:32.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hf7l7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 6 20:50:32.227: INFO: stderr: "I0906 20:50:32.154806 1596 log.go:172] (0xc000138580) (0xc0005c3360) Create stream\nI0906 20:50:32.154851 1596 log.go:172] (0xc000138580) (0xc0005c3360) Stream added, broadcasting: 1\nI0906 20:50:32.156691 1596 log.go:172] (0xc000138580) Reply frame received for 1\nI0906 20:50:32.156726 1596 log.go:172] (0xc000138580) (0xc000210000) Create stream\nI0906 20:50:32.156737 1596 log.go:172] (0xc000138580) (0xc000210000) Stream added, broadcasting: 3\nI0906 20:50:32.157327 1596 log.go:172] (0xc000138580) Reply frame received for 3\nI0906 20:50:32.157350 1596 log.go:172] (0xc000138580) (0xc0005c3400) Create stream\nI0906 20:50:32.157361 1596 log.go:172] (0xc000138580) (0xc0005c3400) Stream added, broadcasting: 5\nI0906 20:50:32.157861 1596 log.go:172] (0xc000138580) Reply frame received for 5\nI0906 20:50:32.224301 1596 log.go:172] (0xc000138580) Data frame received for 3\nI0906 20:50:32.224324 1596 log.go:172] (0xc000210000) (3) Data frame handling\nI0906 20:50:32.224331 1596 log.go:172] (0xc000210000) (3) Data frame sent\nI0906 20:50:32.224335 1596 log.go:172] (0xc000138580) Data frame received for 3\nI0906 20:50:32.224339 1596 log.go:172] (0xc000210000) (3) Data frame handling\nI0906 20:50:32.224354 1596 log.go:172] (0xc000138580) Data frame received for 5\nI0906 20:50:32.224358 1596 log.go:172] (0xc0005c3400) (5) Data frame handling\nI0906 20:50:32.225315 1596 log.go:172] (0xc000138580) Data frame received for 1\nI0906 20:50:32.225338 1596 log.go:172] (0xc0005c3360) (1) Data frame handling\nI0906 20:50:32.225366 1596 log.go:172] (0xc0005c3360) (1) Data frame sent\nI0906 20:50:32.225468 1596 log.go:172] (0xc000138580) (0xc0005c3360) Stream removed, broadcasting: 1\nI0906 20:50:32.225494 1596 log.go:172] (0xc000138580) Go away received\nI0906 20:50:32.225660 1596 log.go:172] (0xc000138580) (0xc0005c3360) Stream removed, broadcasting: 1\nI0906 20:50:32.225699 1596 log.go:172] (0xc000138580) (0xc000210000) Stream removed, broadcasting: 3\nI0906 20:50:32.225718 1596 log.go:172] (0xc000138580) 
(0xc0005c3400) Stream removed, broadcasting: 5\n" Sep 6 20:50:32.227: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 6 20:50:32.227: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 6 20:50:42.288: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:50:42.288: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:50:42.288: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:50:42.288: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:50:52.414: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:50:52.414: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:50:52.414: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:50:52.414: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:02.350: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:02.350: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:02.350: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:02.350: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:12.295: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:12.295: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:12.295: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:12.295: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:22.293: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:22.293: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:22.293: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:22.293: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:32.338: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:32.339: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:32.339: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:42.314: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:42.315: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:42.315: INFO: Waiting for Pod 
e2e-tests-statefulset-hf7l7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:51:52.295: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update Sep 6 20:51:52.295: INFO: Waiting for Pod e2e-tests-statefulset-hf7l7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 6 20:52:02.294: INFO: Waiting for StatefulSet e2e-tests-statefulset-hf7l7/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 6 20:52:12.294: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hf7l7 Sep 6 20:52:12.297: INFO: Scaling statefulset ss2 to 0 Sep 6 20:53:12.315: INFO: Waiting for statefulset status.replicas updated to 0 Sep 6 20:53:12.318: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:53:12.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hf7l7" for this suite. Sep 6 20:53:20.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:53:20.392: INFO: namespace: e2e-tests-statefulset-hf7l7, resource: bindings, ignored listing per whitelist Sep 6 20:53:20.450: INFO: namespace e2e-tests-statefulset-hf7l7 deletion completed in 8.098766112s • [SLOW TEST:419.516 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:53:20.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-dq88r STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dq88r to expose endpoints map[] Sep 6 20:53:20.588: INFO: Get endpoints failed (8.581238ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Sep 6 20:53:21.592: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dq88r exposes endpoints map[] (1.012237078s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-dq88r STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dq88r to expose endpoints 
map[pod1:[100]] Sep 6 20:53:26.761: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.16387506s elapsed, will retry) Sep 6 20:53:29.798: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dq88r exposes endpoints map[pod1:[100]] (8.20055933s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-dq88r STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dq88r to expose endpoints map[pod2:[101] pod1:[100]] Sep 6 20:53:32.892: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dq88r exposes endpoints map[pod1:[100] pod2:[101]] (3.090927118s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-dq88r STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dq88r to expose endpoints map[pod2:[101]] Sep 6 20:53:33.912: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dq88r exposes endpoints map[pod2:[101]] (1.016565506s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-dq88r STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-dq88r to expose endpoints map[] Sep 6 20:53:34.921: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-dq88r exposes endpoints map[] (1.005021108s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:53:34.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-dq88r" for this suite. Sep 6 20:53:41.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:53:41.065: INFO: namespace: e2e-tests-services-dq88r, resource: bindings, ignored listing per whitelist Sep 6 20:53:41.078: INFO: namespace e2e-tests-services-dq88r deletion completed in 6.086636028s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:20.627 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:53:41.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Sep 6 20:53:41.171: INFO: Waiting up to 5m0s for pod "downward-api-0b70e02f-f083-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-2tkfm" to be "success or failure" Sep 6 20:53:41.175: INFO: Pod 
"downward-api-0b70e02f-f083-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579687ms Sep 6 20:53:43.178: INFO: Pod "downward-api-0b70e02f-f083-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007434662s Sep 6 20:53:45.182: INFO: Pod "downward-api-0b70e02f-f083-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011198456s STEP: Saw pod success Sep 6 20:53:45.182: INFO: Pod "downward-api-0b70e02f-f083-11ea-b72c-0242ac110008" satisfied condition "success or failure" Sep 6 20:53:45.185: INFO: Trying to get logs from node hunter-worker pod downward-api-0b70e02f-f083-11ea-b72c-0242ac110008 container dapi-container: STEP: delete the pod Sep 6 20:53:45.216: INFO: Waiting for pod downward-api-0b70e02f-f083-11ea-b72c-0242ac110008 to disappear Sep 6 20:53:45.235: INFO: Pod downward-api-0b70e02f-f083-11ea-b72c-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:53:45.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2tkfm" for this suite. Sep 6 20:53:51.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:53:51.305: INFO: namespace: e2e-tests-downward-api-2tkfm, resource: bindings, ignored listing per whitelist Sep 6 20:53:51.346: INFO: namespace e2e-tests-downward-api-2tkfm deletion completed in 6.108180893s • [SLOW TEST:10.268 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:53:51.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:53:51.475: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 6 20:53:56.478: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 6 20:53:56.479: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 6 20:53:56.514: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-qjrsz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qjrsz/deployments/test-cleanup-deployment,UID:1493f6f5-f083-11ea-b060-0242ac120006,ResourceVersion:215987,Generation:1,CreationTimestamp:2020-09-06 20:53:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Sep 6 20:53:56.524: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Sep 6 20:53:56.524: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 6 20:53:56.524: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-qjrsz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qjrsz/replicasets/test-cleanup-controller,UID:118f9247-f083-11ea-b060-0242ac120006,ResourceVersion:215988,Generation:1,CreationTimestamp:2020-09-06 20:53:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1493f6f5-f083-11ea-b060-0242ac120006 0xc001a7c0e7 0xc001a7c0e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Sep 6 20:53:56.538: INFO: Pod "test-cleanup-controller-lj5rw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-lj5rw,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-qjrsz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qjrsz/pods/test-cleanup-controller-lj5rw,UID:119816b5-f083-11ea-b060-0242ac120006,ResourceVersion:215983,Generation:0,CreationTimestamp:2020-09-06 20:53:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 118f9247-f083-11ea-b060-0242ac120006 0xc001a7c767 0xc001a7c768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7hbhx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hbhx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-7hbhx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a7c7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a7c800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:53:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:53:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:53:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:53:51 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.65,StartTime:2020-09-06 20:53:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:53:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ea6bd4a981fbe7c0d267aa5d3797f223eec58776a0d1a0503aa58aa6c17c400f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:53:56.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-qjrsz" for this suite. 
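In the Deployment dump above, `RevisionHistoryLimit:*0` is the setting that makes the controller delete old ReplicaSets as soon as a new one takes over, which is exactly what the "deployment should delete old replica sets" test waits for. A minimal sketch of setting that field in Go; the deployment name and labels are illustrative, while the redis image matches the one in the dump:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // keep no old ReplicaSets around after a rollout

	labels := map[string]string{"name": "cleanup-pod"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"}, // illustrative name
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"},
					},
				},
			},
		},
	}
	fmt.Printf("revisionHistoryLimit=%d\n", *d.Spec.RevisionHistoryLimit)
}
```

With the default limit of 10, the superseded ReplicaSet would instead be scaled to zero and retained for rollbacks.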
Sep 6 20:54:04.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:54:04.780: INFO: namespace: e2e-tests-deployment-qjrsz, resource: bindings, ignored listing per whitelist Sep 6 20:54:04.783: INFO: namespace e2e-tests-deployment-qjrsz deletion completed in 8.136340181s • [SLOW TEST:13.437 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:54:04.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:54:11.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-szj44" for this suite. Sep 6 20:54:17.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:54:17.194: INFO: namespace: e2e-tests-namespaces-szj44, resource: bindings, ignored listing per whitelist Sep 6 20:54:17.237: INFO: namespace e2e-tests-namespaces-szj44 deletion completed in 6.073655728s STEP: Destroying namespace "e2e-tests-nsdeletetest-vgsd9" for this suite. Sep 6 20:54:17.239: INFO: Namespace e2e-tests-nsdeletetest-vgsd9 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-2w5rw" for this suite. 
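The Namespaces test above creates a service inside a throwaway namespace, deletes the namespace, recreates it, and then verifies that no service survived the round trip. A minimal sketch of that verification step with client-go, assuming a recent client-go (v0.18 or newer) and a hypothetical namespace name standing in for the e2e-tests-nsdeletetest-* namespaces in the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// After a namespace is deleted and recreated under the same name, it must
	// come back empty: listing services in it should return zero items.
	svcs, err := client.CoreV1().Services("nsdeletetest").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services found in recreated namespace: %d\n", len(svcs.Items))
}
```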
Sep 6 20:54:23.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:54:23.303: INFO: namespace: e2e-tests-nsdeletetest-2w5rw, resource: bindings, ignored listing per whitelist Sep 6 20:54:23.346: INFO: namespace e2e-tests-nsdeletetest-2w5rw deletion completed in 6.106405636s • [SLOW TEST:18.562 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:54:23.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:54:23.444: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:54:33.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dvttw" for this suite. 
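The Pods test above fetches container logs over a websocket connection to the API server. For comparison, the ordinary client-go path hits the same `/log` subresource with a streaming REST request rather than websockets; the sketch below shows that plainer variant, assuming a recent client-go (v0.18 or newer, where Stream takes a context) and a hypothetical pod name and namespace:

```go
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical pod; the e2e test creates its own pod in a fresh namespace.
	req := client.CoreV1().Pods("default").GetLogs("log-demo", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	out, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
```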
Sep 6 20:55:37.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:55:37.540: INFO: namespace: e2e-tests-pods-dvttw, resource: bindings, ignored listing per whitelist Sep 6 20:55:37.578: INFO: namespace e2e-tests-pods-dvttw deletion completed in 1m4.091408924s • [SLOW TEST:74.232 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:55:37.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:55:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6ccvq" for this suite. 
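The "Pods Set QOS Class" test above only creates a pod and checks that `status.qosClass` is populated. The class is derived from the container resources: requests equal to limits on every container yields Guaranteed, any requests or limits without that equality yields Burstable, and no requests or limits at all yields BestEffort. A sketch of a pod spec that should be reported as Guaranteed; the names and quantities are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Requests == limits for both CPU and memory on every container, so the
	// API reports status.qosClass=Guaranteed for this pod.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox",
				Resources: corev1.ResourceRequirements{
					Requests: rl,
					Limits:   rl,
				},
			}},
		},
	}
	fmt.Printf("requests=%v limits=%v\n",
		pod.Spec.Containers[0].Resources.Requests,
		pod.Spec.Containers[0].Resources.Limits)
}
```

The pod created by the test sets no resources at all, so the class it verifies is BestEffort.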
Sep 6 20:55:59.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:55:59.803: INFO: namespace: e2e-tests-pods-6ccvq, resource: bindings, ignored listing per whitelist Sep 6 20:55:59.811: INFO: namespace e2e-tests-pods-6ccvq deletion completed in 22.098269546s • [SLOW TEST:22.233 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:55:59.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 20:55:59.926: INFO: Creating deployment "nginx-deployment" Sep 6 20:55:59.940: INFO: Waiting for observed generation 1 Sep 6 20:56:02.132: INFO: Waiting for all required pods to come up Sep 6 20:56:02.135: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Sep 6 20:56:10.147: INFO: Waiting for deployment "nginx-deployment" to complete Sep 6 20:56:10.152: INFO: Updating deployment "nginx-deployment" with a non-existent image Sep 6 20:56:10.184: INFO: Updating deployment nginx-deployment Sep 6 20:56:10.184: INFO: Waiting for observed generation 2 Sep 6 20:56:12.254: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Sep 6 20:56:12.257: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Sep 6 20:56:12.259: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Sep 6 20:56:12.267: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Sep 6 20:56:12.267: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Sep 6 20:56:12.269: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Sep 6 20:56:12.272: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Sep 6 20:56:12.272: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Sep 6 20:56:12.279: INFO: Updating deployment nginx-deployment Sep 6 20:56:12.279: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Sep 6 20:56:12.457: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 6 20:56:12.503: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 6 20:56:12.813: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6h6m/deployments/nginx-deployment,UID:5e27e6b5-f083-11ea-b060-0242ac120006,ResourceVersion:216551,Generation:3,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-09-06 20:56:11 +0000 UTC 2020-09-06 20:55:59 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-09-06 20:56:12 +0000 UTC 2020-09-06 20:56:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Sep 6 20:56:12.855: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6h6m/replicasets/nginx-deployment-5c98f8fb5,UID:64465e38-f083-11ea-b060-0242ac120006,ResourceVersion:216596,Generation:3,CreationTimestamp:2020-09-06 20:56:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5e27e6b5-f083-11ea-b060-0242ac120006 0xc0017a8f37 0xc0017a8f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 6 20:56:12.855: INFO: All old ReplicaSets of Deployment "nginx-deployment": Sep 6 20:56:12.856: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6h6m/replicasets/nginx-deployment-85ddf47c5d,UID:5e2aff23-f083-11ea-b060-0242ac120006,ResourceVersion:216590,Generation:3,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5e27e6b5-f083-11ea-b060-0242ac120006 0xc0017a8ff7 0xc0017a8ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-86dm7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-86dm7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-86dm7,UID:64690e38-f083-11ea-b060-0242ac120006,ResourceVersion:216532,Generation:0,CreationTimestamp:2020-09-06 20:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3637 0xc0021c3638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c36b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c36d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-06 20:56:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-bsdkx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bsdkx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-bsdkx,UID:65bc2c74-f083-11ea-b060-0242ac120006,ResourceVersion:216591,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3790 0xc0021c3791}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3810} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-c7lzr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c7lzr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-c7lzr,UID:659ff3e4-f083-11ea-b060-0242ac120006,ResourceVersion:216602,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c38a0 0xc0021c38a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-06 20:56:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-dzj5x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dzj5x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-dzj5x,UID:644c3bd7-f083-11ea-b060-0242ac120006,ResourceVersion:216528,Generation:0,CreationTimestamp:2020-09-06 20:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3a00 0xc0021c3a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-06 20:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-jslhm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jslhm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-jslhm,UID:64664f0d-f083-11ea-b060-0242ac120006,ResourceVersion:216521,Generation:0,CreationTimestamp:2020-09-06 20:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3b60 0xc0021c3b61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-06 20:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-mjcl2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mjcl2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-mjcl2,UID:65aee4e4-f083-11ea-b060-0242ac120006,ResourceVersion:216574,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3cc0 0xc0021c3cc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.939: INFO: Pod "nginx-deployment-5c98f8fb5-n8gt6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n8gt6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-n8gt6,UID:65a747db-f083-11ea-b060-0242ac120006,ResourceVersion:216560,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3dd0 0xc0021c3dd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021c3e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-r7szt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r7szt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-r7szt,UID:644c35a4-f083-11ea-b060-0242ac120006,ResourceVersion:216510,Generation:0,CreationTimestamp:2020-09-06 20:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0021c3ee0 0xc0021c3ee1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021c3f60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021c3f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-06 20:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-rkrc8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rkrc8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-rkrc8,UID:65ae9a06-f083-11ea-b060-0242ac120006,ResourceVersion:216576,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc001888040 0xc001888041}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018881c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018881e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-rpkb9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rpkb9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-rpkb9,UID:65aee32c-f083-11ea-b060-0242ac120006,ResourceVersion:216579,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc001888260 0xc001888261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018882e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-s4tsh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s4tsh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-s4tsh,UID:6449378c-f083-11ea-b060-0242ac120006,ResourceVersion:216515,Generation:0,CreationTimestamp:2020-09-06 20:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc001888370 0xc001888371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018883f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-06 20:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-svf45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-svf45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-svf45,UID:65a70ec4-f083-11ea-b060-0242ac120006,ResourceVersion:216557,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc0018884d0 0xc0018884d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018885b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018885d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-5c98f8fb5-z2pf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z2pf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-5c98f8fb5-z2pf8,UID:65af0b63-f083-11ea-b060-0242ac120006,ResourceVersion:216581,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 64465e38-f083-11ea-b060-0242ac120006 0xc001888640 0xc001888641}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018886c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018886e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.940: INFO: Pod "nginx-deployment-85ddf47c5d-58rp7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-58rp7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-58rp7,UID:65a00636-f083-11ea-b060-0242ac120006,ResourceVersion:216595,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888750 0xc001888751}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018887c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018887e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-06 20:56:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-65h2t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-65h2t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-65h2t,UID:5e3dad2d-f083-11ea-b060-0242ac120006,ResourceVersion:216468,Generation:0,CreationTimestamp:2020-09-06 20:56:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888890 0xc001888891}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.70,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ccb5332d1ecdccc572b41cb5c21496445a11186dac48c676cf5a43ccba467d0e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-69wnc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-69wnc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-69wnc,UID:5e317c29-f083-11ea-b060-0242ac120006,ResourceVersion:216452,Generation:0,CreationTimestamp:2020-09-06 
20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc0018889f0 0xc0018889f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.60,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8cfe297e000e9df58d8d775b454a5bcd37d36fdc15ad3ebac6687bfdc0b54d61}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-9n7cn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9n7cn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-9n7cn,UID:5e318183-f083-11ea-b060-0242ac120006,ResourceVersion:216464,Generation:0,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888b40 0xc001888b41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.69,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cea5cdd6abee04442a8e99d20a97a24caeae820e7b93bcde44c8a7a0720b7cb4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-b6cj4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b6cj4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-b6cj4,UID:5e305446-f083-11ea-b060-0242ac120006,ResourceVersion:216418,Generation:0,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888c90 0xc001888c91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:55:59 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.57,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62767e31705ba0e53081ab9b5c601db67dbeff304f383153197a4018fdd03ca9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-flvvz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-flvvz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-flvvz,UID:65a75779-f083-11ea-b060-0242ac120006,ResourceVersion:216570,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888de0 0xc001888de1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-j6d4w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j6d4w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-j6d4w,UID:65aef0aa-f083-11ea-b060-0242ac120006,ResourceVersion:216575,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888ee0 0xc001888ee1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001888f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001888f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-m4wtk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m4wtk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-m4wtk,UID:5e305730-f083-11ea-b060-0242ac120006,ResourceVersion:216437,Generation:0,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001888fe0 0xc001888fe1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889050} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:55:59 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.68,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://371ae17155c9f956c0c62136358e446eb39335ffcaff96658595e3524b392c33}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-nrpdv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nrpdv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-nrpdv,UID:5e3185af-f083-11ea-b060-0242ac120006,ResourceVersion:216431,Generation:0,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889130 0xc001889131}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018891a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018891c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.58,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f942f9702225e2a037eac8f033b413f2bab0ec9755f5e4013bda71a5de10c1f4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-pcbxv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pcbxv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-pcbxv,UID:5e3da4bf-f083-11ea-b060-0242ac120006,ResourceVersion:216456,Generation:0,CreationTimestamp:2020-09-06 
20:56:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889280 0xc001889281}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018892f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.59,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://54465fe3780bcc69b197ddb4348c7054d19ea530270c82e362c61078a4638179}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.941: INFO: Pod "nginx-deployment-85ddf47c5d-qkt4k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qkt4k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-qkt4k,UID:65a74ab4-f083-11ea-b060-0242ac120006,ResourceVersion:216571,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889450 
0xc001889451}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018894c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018894e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-qwxtm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qwxtm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-qwxtm,UID:65a7514e-f083-11ea-b060-0242ac120006,ResourceVersion:216572,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889560 0xc001889561}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018896b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018896d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-slqxx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-slqxx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-slqxx,UID:65aefb54-f083-11ea-b060-0242ac120006,ResourceVersion:216584,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889740 0xc001889741}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018897b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018897d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-tbch2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tbch2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-tbch2,UID:65aeece1-f083-11ea-b060-0242ac120006,ResourceVersion:216580,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889900 0xc001889901}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018899a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-vg7ct" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vg7ct,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-vg7ct,UID:5e2f9bbf-f083-11ea-b060-0242ac120006,ResourceVersion:216461,Generation:0,CreationTimestamp:2020-09-06 20:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889aa0 
0xc001889aa1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:55:59 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.71,StartTime:2020-09-06 20:56:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-06 20:56:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f0265ef98933ed9d8f5df57752ed3aa2e05a24442edd834ca545efbe51faa01f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-vv7s9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vv7s9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-vv7s9,UID:65aef90c-f083-11ea-b060-0242ac120006,ResourceVersion:216583,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889c50 0xc001889c51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-vzqk5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vzqk5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-vzqk5,UID:65a74e55-f083-11ea-b060-0242ac120006,ResourceVersion:216568,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889d50 0xc001889d51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889dc0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001889de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-w9zw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w9zw8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-w9zw8,UID:65a011a9-f083-11ea-b060-0242ac120006,ResourceVersion:216549,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889e50 0xc001889e51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-x56lb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x56lb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-x56lb,UID:65af02c1-f083-11ea-b060-0242ac120006,ResourceVersion:216582,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc001889f50 0xc001889f51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001889fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001889fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 6 20:56:12.942: INFO: Pod "nginx-deployment-85ddf47c5d-zfb72" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zfb72,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-d6h6m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6h6m/pods/nginx-deployment-85ddf47c5d-zfb72,UID:658ba379-f083-11ea-b060-0242ac120006,ResourceVersion:216588,Generation:0,CreationTimestamp:2020-09-06 20:56:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 5e2aff23-f083-11ea-b060-0242ac120006 0xc0020de160 0xc0020de161}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2xg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2xg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vq2xg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020de350} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020de370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 20:56:12 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-06 20:56:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 20:56:12.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-d6h6m" for this suite. 
Sep 6 20:56:31.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 20:56:31.181: INFO: namespace: e2e-tests-deployment-d6h6m, resource: bindings, ignored listing per whitelist Sep 6 20:56:31.181: INFO: namespace e2e-tests-deployment-d6h6m deletion completed in 18.177534567s • [SLOW TEST:31.370 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 20:56:31.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-k5qdv Sep 6 20:56:35.292: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-k5qdv STEP: checking the pod's current state and verifying that restartCount is present Sep 6 20:56:35.295: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 21:00:35.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-k5qdv" for this suite. 
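For reference, a minimal sketch (not taken from the suite) of a pod whose exec liveness probe keeps succeeding as long as /tmp/health exists, which is the behaviour the probe test above depends on. The image, names, and timings are illustrative assumptions, and the embedded Handler field follows the pre-1.23 core/v1 API that matches this v1.13 run:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container touches /tmp/health and keeps running, so "cat /tmp/health"
	// keeps succeeding and the kubelet never restarts the container.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "liveness",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // prints the manifest as JSON, the same shape as the dumps above
}

Removing /tmp/health inside the container would flip the probe to failing and the kubelet would restart the container, which is the scenario the companion failing-probe conformance test covers.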
Sep 6 21:00:41.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 21:00:41.987: INFO: namespace: e2e-tests-container-probe-k5qdv, resource: bindings, ignored listing per whitelist Sep 6 21:00:42.037: INFO: namespace e2e-tests-container-probe-k5qdv deletion completed in 6.102442206s • [SLOW TEST:250.856 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 21:00:42.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 6 21:00:42.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-m6xtg' Sep 6 21:00:44.489: INFO: stderr: "" Sep 6 21:00:44.489: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Sep 6 21:00:49.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-m6xtg -o json' Sep 6 21:00:49.645: INFO: stderr: "" Sep 6 21:00:49.645: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-06T21:00:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-m6xtg\",\n \"resourceVersion\": \"217360\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-m6xtg/pods/e2e-test-nginx-pod\",\n \"uid\": \"07c240db-f084-11ea-b060-0242ac120006\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5fvs8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5fvs8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5fvs8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-06T21:00:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-06T21:00:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-06T21:00:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-06T21:00:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a6fdea0ac2d576358bb5f31d7715791d7886b703834d38fb555f204a460a97a8\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-06T21:00:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.86\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-06T21:00:44Z\"\n }\n}\n" STEP: replace the image in the pod Sep 6 21:00:49.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-m6xtg' Sep 6 21:00:49.936: INFO: stderr: "" Sep 6 21:00:49.936: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Sep 6 21:00:49.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-m6xtg' Sep 6 21:01:00.070: INFO: stderr: "" Sep 6 21:01:00.070: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 21:01:00.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m6xtg" for this suite. 
Sep 6 21:01:06.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 21:01:06.152: INFO: namespace: e2e-tests-kubectl-m6xtg, resource: bindings, ignored listing per whitelist Sep 6 21:01:06.158: INFO: namespace e2e-tests-kubectl-m6xtg deletion completed in 6.085222185s • [SLOW TEST:24.120 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 21:01:06.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-jhbq STEP: Creating a pod to test atomic-volume-subpath Sep 6 21:01:06.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jhbq" in namespace "e2e-tests-subpath-zxg2f" to be "success or failure" Sep 6 21:01:06.334: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.810608ms Sep 6 21:01:08.339: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008505029s Sep 6 21:01:10.378: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047306654s Sep 6 21:01:12.382: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 6.051296985s Sep 6 21:01:14.386: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 8.055442508s Sep 6 21:01:16.390: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 10.059686585s Sep 6 21:01:18.395: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 12.064115683s Sep 6 21:01:20.399: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 14.068650073s Sep 6 21:01:22.403: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 16.072154241s Sep 6 21:01:24.407: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 18.07614318s Sep 6 21:01:26.411: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.080656443s Sep 6 21:01:28.415: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 22.084461047s Sep 6 21:01:30.418: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Running", Reason="", readiness=false. Elapsed: 24.087739245s Sep 6 21:01:32.423: INFO: Pod "pod-subpath-test-configmap-jhbq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.092031365s STEP: Saw pod success Sep 6 21:01:32.423: INFO: Pod "pod-subpath-test-configmap-jhbq" satisfied condition "success or failure" Sep 6 21:01:32.426: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-jhbq container test-container-subpath-configmap-jhbq: STEP: delete the pod Sep 6 21:01:32.493: INFO: Waiting for pod pod-subpath-test-configmap-jhbq to disappear Sep 6 21:01:32.504: INFO: Pod pod-subpath-test-configmap-jhbq no longer exists STEP: Deleting pod pod-subpath-test-configmap-jhbq Sep 6 21:01:32.504: INFO: Deleting pod "pod-subpath-test-configmap-jhbq" in namespace "e2e-tests-subpath-zxg2f" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 21:01:32.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-zxg2f" for this suite. Sep 6 21:01:38.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 21:01:38.548: INFO: namespace: e2e-tests-subpath-zxg2f, resource: bindings, ignored listing per whitelist Sep 6 21:01:38.602: INFO: namespace e2e-tests-subpath-zxg2f deletion completed in 6.092377764s • [SLOW TEST:32.444 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 21:01:38.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Sep 6 21:01:43.268: INFO: Successfully updated pod "annotationupdate28140c53-f084-11ea-b72c-0242ac110008" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 21:01:45.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-89whk" for this suite. 
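As a reference for the annotation-update test above, a minimal sketch (not from the suite) of a pod that projects metadata.annotations into a file through a downwardAPI volume; when the pod's annotations are later patched, the kubelet rewrites the projected file, which is what the test waits to observe. The names, annotation value, and mount path are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate",
			Annotations: map[string]string{"builder": "bar"}, // illustrative initial annotation
		},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}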
Sep 6 21:02:07.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 21:02:07.370: INFO: namespace: e2e-tests-downward-api-89whk, resource: bindings, ignored listing per whitelist Sep 6 21:02:07.394: INFO: namespace e2e-tests-downward-api-89whk deletion completed in 22.087553444s • [SLOW TEST:28.792 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 21:02:07.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-3942dcb1-f084-11ea-b72c-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-3942dd48-f084-11ea-b72c-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3942dcb1-f084-11ea-b72c-0242ac110008 STEP: Updating configmap cm-test-opt-upd-3942dd48-f084-11ea-b72c-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-3942dd6f-f084-11ea-b72c-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 6 21:02:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ld2fx" for this suite. 
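For the optional-configMap test above, a sketch (not from the suite) of a projected volume with optional configMap sources; marking a source Optional means a missing configMap simply leaves its entries out instead of blocking the mount, which is why the test can delete one configMap and create another while the pod keeps running. The cm-test-opt-* names echo the generated names above but are otherwise illustrative:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{
					{
						ConfigMap: &v1.ConfigMapProjection{
							LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-del"},
							Optional:             &optional, // absent configMap: entries omitted, mount still succeeds
						},
					},
					{
						ConfigMap: &v1.ConfigMapProjection{
							LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-upd"},
							Optional:             &optional,
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out)) // volume definition as it would appear inside a PodSpec
}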
Sep 6 21:02:41.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 6 21:02:41.768: INFO: namespace: e2e-tests-projected-ld2fx, resource: bindings, ignored listing per whitelist Sep 6 21:02:41.770: INFO: namespace e2e-tests-projected-ld2fx deletion completed in 24.092644566s • [SLOW TEST:34.376 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 6 21:02:41.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 6 21:02:41.948: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-518ae3a6-f084-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:02:48.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-kh5v6" to be "success or failure"
Sep  6 21:02:48.335: INFO: Pod "pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 42.267601ms
Sep  6 21:02:50.339: INFO: Pod "pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046002839s
Sep  6 21:02:52.343: INFO: Pod "pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050097906s
STEP: Saw pod success
Sep  6 21:02:52.343: INFO: Pod "pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:02:52.346: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Sep  6 21:02:52.368: INFO: Waiting for pod pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:02:52.371: INFO: Pod pod-configmaps-518b9491-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:02:52.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kh5v6" for this suite.
Sep  6 21:02:58.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:02:58.412: INFO: namespace: e2e-tests-configmap-kh5v6, resource: bindings, ignored listing per whitelist
Sep  6 21:02:58.458: INFO: namespace e2e-tests-configmap-kh5v6 deletion completed in 6.08448362s

• [SLOW TEST:10.331 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
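The case above mounts a ConfigMap into the pod through a volume with an explicit key-to-path mapping and reads the mapped file back from the container. Below is a minimal client-go sketch of the same shape, not the framework's own code: the data-2/path/to/labels mapping, the busybox image, the mount path and the context-taking Create signatures are assumptions here (the suite that produced this log was built against v1.13-era client-go, which did not take a context).

package e2eexamples

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMappedConfigMapPod mounts key "data-2" of a ConfigMap at the relative
// path path/to/labels inside the volume, mirroring the "volume with mappings"
// case exercised above, and has the container print the mapped file.
func createMappedConfigMapPod(ctx context.Context, c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"}, // illustrative name
		Data:       map[string]string{"data-2": "value-2"},
	}
	if _, err := c.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					// The mapping: key "data-2" shows up as path/to/labels in the volume.
					Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/labels"}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // illustrative; the suite uses its own test images
				Command: []string{"cat", "/etc/configmap-volume/path/to/labels"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------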
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:02:58.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:03:02.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-b6grc" for this suite.
Sep  6 21:03:08.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:03:08.816: INFO: namespace: e2e-tests-emptydir-wrapper-b6grc, resource: bindings, ignored listing per whitelist
Sep  6 21:03:08.824: INFO: namespace e2e-tests-emptydir-wrapper-b6grc deletion completed in 6.098299248s

• [SLOW TEST:10.366 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:03:08.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5ddffd62-f084-11ea-b72c-0242ac110008
STEP: Creating secret with name s-test-opt-upd-5ddffddc-f084-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5ddffd62-f084-11ea-b72c-0242ac110008
STEP: Updating secret s-test-opt-upd-5ddffddc-f084-11ea-b72c-0242ac110008
STEP: Creating secret with name s-test-opt-create-5ddffe33-f084-11ea-b72c-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:04:33.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k8xz9" for this suite.
Sep  6 21:04:55.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:04:55.468: INFO: namespace: e2e-tests-secrets-k8xz9, resource: bindings, ignored listing per whitelist
Sep  6 21:04:55.535: INFO: namespace e2e-tests-secrets-k8xz9 deletion completed in 22.095306242s

• [SLOW TEST:106.711 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
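The "optional updates" case above works because a secret volume marked Optional tolerates a missing secret: the kubelet mounts an empty directory, then projects the keys in on a later sync once the secret exists or changes. A minimal sketch of such a pod spec follows; the names, image and sleep command are illustrative, and this is not the e2e framework's pod builder.

package e2eexamples

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalSecretPod returns a pod whose secret volume is optional, so the pod
// starts even though the referenced secret does not exist yet and later picks
// up its creation and updates, which is what the test above waits to observe.
func optionalSecretPod(ns string) *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: ns}, // illustrative
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "s-test-opt-create", // may not exist at pod creation time
					Optional:   &optional,
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
------------------------------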
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:04:55.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  6 21:04:55.665: INFO: Waiting up to 5m0s for pod "pod-9d7872a0-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-k9pcr" to be "success or failure"
Sep  6 21:04:55.671: INFO: Pod "pod-9d7872a0-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03482ms
Sep  6 21:04:57.725: INFO: Pod "pod-9d7872a0-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060447628s
Sep  6 21:04:59.729: INFO: Pod "pod-9d7872a0-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064247772s
STEP: Saw pod success
Sep  6 21:04:59.729: INFO: Pod "pod-9d7872a0-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:04:59.732: INFO: Trying to get logs from node hunter-worker pod pod-9d7872a0-f084-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:04:59.811: INFO: Waiting for pod pod-9d7872a0-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:04:59.815: INFO: Pod pod-9d7872a0-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:04:59.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k9pcr" for this suite.
Sep  6 21:05:05.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:05:05.879: INFO: namespace: e2e-tests-emptydir-k9pcr, resource: bindings, ignored listing per whitelist
Sep  6 21:05:05.920: INFO: namespace e2e-tests-emptydir-k9pcr deletion completed in 6.10203927s

• [SLOW TEST:10.385 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
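For reference, the (root,0777,default) case boils down to a pod with an emptyDir volume on the node's default medium whose mount point should carry 0777 permissions. A small illustrative pod spec is sketched below; the names, image and stat command are assumptions, not the test's exact container.

package e2eexamples

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod runs one container against an emptyDir volume; an empty
// EmptyDirVolumeSource means "default medium" (node disk rather than tmpfs),
// and the container prints the mount's permission bits in octal.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}
------------------------------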
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:05:05.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep  6 21:05:06.035: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218125,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  6 21:05:06.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218126,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep  6 21:05:06.035: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218127,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep  6 21:05:16.058: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218148,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  6 21:05:16.058: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218149,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Sep  6 21:05:16.058: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-7bzps,SelfLink:/api/v1/namespaces/e2e-tests-watch-7bzps/configmaps/e2e-watch-test-label-changed,UID:a3a65645-f084-11ea-b060-0242ac120006,ResourceVersion:218150,Generation:0,CreationTimestamp:2020-09-06 21:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:05:16.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7bzps" for this suite.
Sep  6 21:05:22.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:05:22.193: INFO: namespace: e2e-tests-watch-7bzps, resource: bindings, ignored listing per whitelist
Sep  6 21:05:22.198: INFO: namespace e2e-tests-watch-7bzps deletion completed in 6.13582391s

• [SLOW TEST:16.278 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
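The Watchers case above hinges on a label-selector watch: once the configmap's label is changed away from the selector the server sends a DELETED event, and restoring the label produces a fresh ADDED. A minimal client-go sketch of opening such a watch is below; it assumes current context-taking signatures (the v1.13-era client used here did not take a context) and simply prints whatever events arrive rather than asserting on them.

package e2eexamples

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelledConfigMaps opens a watch restricted to configmaps carrying the
// label used in the log above and streams the resulting events.
func watchLabelledConfigMaps(ctx context.Context, c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		// In the scenario above this prints ADDED, MODIFIED, DELETED, then
		// ADDED/MODIFIED/DELETED again after the label is restored.
		fmt.Printf("Got : %s %T\n", event.Type, event.Object)
	}
	return nil
}
------------------------------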
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:05:22.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:05:22.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Sep  6 21:05:22.356: INFO: stderr: ""
Sep  6 21:05:22.356: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-06T19:06:33Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Sep  6 21:05:22.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8q5r2'
Sep  6 21:05:22.618: INFO: stderr: ""
Sep  6 21:05:22.618: INFO: stdout: "replicationcontroller/redis-master created\n"
Sep  6 21:05:22.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8q5r2'
Sep  6 21:05:22.880: INFO: stderr: ""
Sep  6 21:05:22.880: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  6 21:05:23.903: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:05:23.903: INFO: Found 0 / 1
Sep  6 21:05:24.885: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:05:24.885: INFO: Found 0 / 1
Sep  6 21:05:25.884: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:05:25.884: INFO: Found 0 / 1
Sep  6 21:05:26.885: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:05:26.885: INFO: Found 1 / 1
Sep  6 21:05:26.885: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  6 21:05:26.889: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:05:26.889: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  6 21:05:26.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-6rc4z --namespace=e2e-tests-kubectl-8q5r2'
Sep  6 21:05:27.006: INFO: stderr: ""
Sep  6 21:05:27.006: INFO: stdout: "Name:               redis-master-6rc4z\nNamespace:          e2e-tests-kubectl-8q5r2\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker2/172.18.0.7\nStart Time:         Sun, 06 Sep 2020 21:05:22 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.2.89\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://94bca17bc44b0b66aa5497c058e6c1353dd7d2da846ee2b596063cc4120d90e0\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 06 Sep 2020 21:05:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bqnqg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-bqnqg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-bqnqg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                     Message\n  ----    ------     ----  ----                     -------\n  Normal  Scheduled  5s    default-scheduler        Successfully assigned e2e-tests-kubectl-8q5r2/redis-master-6rc4z to hunter-worker2\n  Normal  Pulled     4s    kubelet, hunter-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, hunter-worker2  Created container\n  Normal  Started    2s    kubelet, hunter-worker2  Started container\n"
Sep  6 21:05:27.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-8q5r2'
Sep  6 21:05:27.133: INFO: stderr: ""
Sep  6 21:05:27.133: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-8q5r2\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-6rc4z\n"
Sep  6 21:05:27.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-8q5r2'
Sep  6 21:05:27.245: INFO: stderr: ""
Sep  6 21:05:27.245: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-8q5r2\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.107.90.150\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.89:6379\nSession Affinity:  None\nEvents:            \n"
Sep  6 21:05:27.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Sep  6 21:05:27.377: INFO: stderr: ""
Sep  6 21:05:27.377: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 05 Sep 2020 13:36:48 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 06 Sep 2020 21:05:22 +0000   Sat, 05 Sep 2020 13:36:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 06 Sep 2020 21:05:22 +0000   Sat, 05 Sep 2020 13:36:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 06 Sep 2020 21:05:22 +0000   Sat, 05 Sep 2020 13:36:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 06 Sep 2020 21:05:22 +0000   Sat, 05 Sep 2020 13:37:39 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.6\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 44138625b7954241b3c3f092d0954773\n System UUID:                fca70277-c2bb-4584-a99b-46841510eb2f\n Boot ID:                    16f80d7c-7741-4040-9735-0d166ad57c21\n Kernel Version:             4.15.0-115-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-gv2l2                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     31h\n  kube-system                coredns-54ff9cd656-t76vb                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     31h\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h\n  kube-system                kindnet-78dfs                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      31h\n  kube-system                
kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         31h\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         31h\n  kube-system                kube-proxy-qmxds                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         31h\n  local-path-storage         local-path-provisioner-674595c7-lmd9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Sep  6 21:05:27.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-8q5r2'
Sep  6 21:05:27.494: INFO: stderr: ""
Sep  6 21:05:27.494: INFO: stdout: "Name:         e2e-tests-kubectl-8q5r2\nLabels:       e2e-framework=kubectl\n              e2e-run=63905285-f07d-11ea-b72c-0242ac110008\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:05:27.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8q5r2" for this suite.
Sep  6 21:05:49.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:05:49.582: INFO: namespace: e2e-tests-kubectl-8q5r2, resource: bindings, ignored listing per whitelist
Sep  6 21:05:49.587: INFO: namespace e2e-tests-kubectl-8q5r2 deletion completed in 22.088959036s

• [SLOW TEST:27.388 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:05:49.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep  6 21:05:49.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4zztq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4zztq/configmaps/e2e-watch-test-resource-version,UID:bdafea0f-f084-11ea-b060-0242ac120006,ResourceVersion:218267,Generation:0,CreationTimestamp:2020-09-06 21:05:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  6 21:05:49.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4zztq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4zztq/configmaps/e2e-watch-test-resource-version,UID:bdafea0f-f084-11ea-b060-0242ac120006,ResourceVersion:218268,Generation:0,CreationTimestamp:2020-09-06 21:05:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:05:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4zztq" for this suite.
Sep  6 21:05:55.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:05:55.845: INFO: namespace: e2e-tests-watch-4zztq, resource: bindings, ignored listing per whitelist
Sep  6 21:05:55.870: INFO: namespace e2e-tests-watch-4zztq deletion completed in 6.105915051s

• [SLOW TEST:6.283 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
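Here the watch is opened only after the configmap has already been modified twice and deleted, yet the log still shows the MODIFIED and DELETED events: supplying a ResourceVersion makes the API server replay every change after that point (within its watch history window). A hedged client-go sketch of that pattern follows; the selector and the two-event cutoff mirror the log, while the signatures assume a current client-go.

package e2eexamples

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchFromResourceVersion starts a watch at an earlier ResourceVersion so the
// server replays the changes made after that version, e.g. the version
// returned by the first update of the configmap.
func watchFromResourceVersion(ctx context.Context, c kubernetes.Interface, ns, rv string) ([]watch.Event, error) {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: rv,
	})
	if err != nil {
		return nil, err
	}
	defer w.Stop()
	var events []watch.Event
	for ev := range w.ResultChan() {
		events = append(events, ev)
		if len(events) == 2 { // expect MODIFIED then DELETED, as in the log above
			break
		}
	}
	return events, nil
}
------------------------------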
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:05:55.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Sep  6 21:06:04.032: INFO: 9 pods remaining
Sep  6 21:06:04.032: INFO: 1 pods has nil DeletionTimestamp
Sep  6 21:06:04.032: INFO: 
STEP: Gathering metrics
W0906 21:06:04.967065       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  6 21:06:04.967: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:06:04.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lh4f7" for this suite.
Sep  6 21:06:11.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:06:11.085: INFO: namespace: e2e-tests-gc-lh4f7, resource: bindings, ignored listing per whitelist
Sep  6 21:06:11.133: INFO: namespace e2e-tests-gc-lh4f7 deletion completed in 6.163447727s

• [SLOW TEST:15.263 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
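The deleteOptions behaviour this case describes corresponds to foreground cascading deletion: the replication controller is only removed once all of its dependent pods are gone, which is why the log still counts "9 pods remaining" after the rc delete was issued. A minimal sketch of issuing such a delete with client-go is below; it assumes the current context-taking Delete signature rather than the v1.13 one.

package e2eexamples

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a replication controller with foreground
// propagation: the server adds the foregroundDeletion finalizer, the garbage
// collector removes the pods first, and only then does the rc object go away.
func deleteRCForeground(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
------------------------------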
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:06:11.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Sep  6 21:06:11.280: INFO: Waiting up to 5m0s for pod "client-containers-ca8b7694-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-containers-hr9hd" to be "success or failure"
Sep  6 21:06:11.285: INFO: Pod "client-containers-ca8b7694-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.05528ms
Sep  6 21:06:13.289: INFO: Pod "client-containers-ca8b7694-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008847119s
Sep  6 21:06:15.293: INFO: Pod "client-containers-ca8b7694-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013250394s
STEP: Saw pod success
Sep  6 21:06:15.293: INFO: Pod "client-containers-ca8b7694-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:06:15.296: INFO: Trying to get logs from node hunter-worker pod client-containers-ca8b7694-f084-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:06:15.317: INFO: Waiting for pod client-containers-ca8b7694-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:06:15.321: INFO: Pod client-containers-ca8b7694-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:06:15.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-hr9hd" for this suite.
Sep  6 21:06:21.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:06:21.410: INFO: namespace: e2e-tests-containers-hr9hd, resource: bindings, ignored listing per whitelist
Sep  6 21:06:21.460: INFO: namespace e2e-tests-containers-hr9hd deletion completed in 6.135149079s

• [SLOW TEST:10.327 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
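The mechanism under test above is simply that a container's Command replaces the image's ENTRYPOINT and its Args replace the image's CMD, so the pod runs exactly what the spec says regardless of image defaults. A short illustrative pod spec is sketched below; the image and the echoed values are assumptions, not the test's exact inputs.

package e2eexamples

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideEntrypointPod overrides both the image ENTRYPOINT (Command) and its
// CMD (Args), the "override all" case exercised above.
func overrideEntrypointPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",                         // illustrative
				Command: []string{"echo"},                  // replaces the image ENTRYPOINT
				Args:    []string{"override", "arguments"}, // replaces the image CMD
			}},
		},
	}
}
------------------------------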
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:06:21.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d0b05a50-f084-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:06:21.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-cqjtl" to be "success or failure"
Sep  6 21:06:21.622: INFO: Pod "pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.451556ms
Sep  6 21:06:23.626: INFO: Pod "pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027518276s
Sep  6 21:06:25.630: INFO: Pod "pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031737191s
STEP: Saw pod success
Sep  6 21:06:25.630: INFO: Pod "pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:06:25.633: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 21:06:25.650: INFO: Waiting for pod pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:06:25.706: INFO: Pod pod-projected-configmaps-d0b24fdf-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:06:25.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cqjtl" for this suite.
Sep  6 21:06:31.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:06:31.748: INFO: namespace: e2e-tests-projected-cqjtl, resource: bindings, ignored listing per whitelist
Sep  6 21:06:31.809: INFO: namespace e2e-tests-projected-cqjtl deletion completed in 6.099765724s

• [SLOW TEST:10.349 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:06:31.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:06:31.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-xchnk" to be "success or failure"
Sep  6 21:06:31.985: INFO: Pod "downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.607236ms
Sep  6 21:06:33.989: INFO: Pod "downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007863324s
Sep  6 21:06:35.993: INFO: Pod "downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011969383s
STEP: Saw pod success
Sep  6 21:06:35.993: INFO: Pod "downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:06:35.996: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:06:36.034: INFO: Waiting for pod downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:06:36.052: INFO: Pod downwardapi-volume-d6df1f29-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:06:36.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xchnk" for this suite.
Sep  6 21:06:42.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:06:42.121: INFO: namespace: e2e-tests-downward-api-xchnk, resource: bindings, ignored listing per whitelist
Sep  6 21:06:42.141: INFO: namespace e2e-tests-downward-api-xchnk deletion completed in 6.086116808s

• [SLOW TEST:10.332 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
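The downward API volume in this case projects the container's own memory request into a file through a resourceFieldRef, and the test reads that file back from the pod's logs. A hedged sketch of such a pod spec follows; the 32Mi request, the file path, and the busybox image are illustrative values, not what the suite used.

package e2eexamples

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod writes the container's memory request into the
// file memory_request via a downward API volume item with a resourceFieldRef,
// then prints it so the test harness can check the value.
func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.memory",
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
------------------------------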
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:06:42.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-6hd4
STEP: Creating a pod to test atomic-volume-subpath
Sep  6 21:06:42.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6hd4" in namespace "e2e-tests-subpath-hwvj7" to be "success or failure"
Sep  6 21:06:42.294: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.804699ms
Sep  6 21:06:44.325: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048447414s
Sep  6 21:06:46.382: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106079553s
Sep  6 21:06:48.388: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=true. Elapsed: 6.112169589s
Sep  6 21:06:50.392: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 8.116024339s
Sep  6 21:06:52.397: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 10.120268062s
Sep  6 21:06:54.401: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 12.124490942s
Sep  6 21:06:56.405: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 14.128618781s
Sep  6 21:06:58.409: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 16.132629272s
Sep  6 21:07:00.425: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 18.148356687s
Sep  6 21:07:02.429: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 20.15242781s
Sep  6 21:07:04.433: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 22.156221146s
Sep  6 21:07:06.437: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Running", Reason="", readiness=false. Elapsed: 24.160223857s
Sep  6 21:07:08.441: INFO: Pod "pod-subpath-test-downwardapi-6hd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.164334905s
STEP: Saw pod success
Sep  6 21:07:08.441: INFO: Pod "pod-subpath-test-downwardapi-6hd4" satisfied condition "success or failure"
Sep  6 21:07:08.443: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-6hd4 container test-container-subpath-downwardapi-6hd4: 
STEP: delete the pod
Sep  6 21:07:08.498: INFO: Waiting for pod pod-subpath-test-downwardapi-6hd4 to disappear
Sep  6 21:07:08.513: INFO: Pod pod-subpath-test-downwardapi-6hd4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6hd4
Sep  6 21:07:08.513: INFO: Deleting pod "pod-subpath-test-downwardapi-6hd4" in namespace "e2e-tests-subpath-hwvj7"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:07:08.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hwvj7" for this suite.
Sep  6 21:07:14.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:07:14.598: INFO: namespace: e2e-tests-subpath-hwvj7, resource: bindings, ignored listing per whitelist
Sep  6 21:07:14.646: INFO: namespace e2e-tests-subpath-hwvj7 deletion completed in 6.12748929s

• [SLOW TEST:32.505 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
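The subpath machinery exercised above comes down to the SubPath field on a volume mount: the container is given only the named file or directory from inside the volume instead of the whole volume root. A tiny illustrative snippet is below; the volume name and paths are assumptions, not the test's actual layout.

package e2eexamples

import (
	corev1 "k8s.io/api/core/v1"
)

// subPathMount exposes a single entry from inside a volume at a specific
// container path, the mechanism behind "should support subpaths" above.
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume/my-file", // where the container sees it
		SubPath:   "my-file",              // entry inside the volume to expose
	}
}
------------------------------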
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:07:14.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:07:14.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-9w928" to be "success or failure"
Sep  6 21:07:14.800: INFO: Pod "downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.521551ms
Sep  6 21:07:16.803: INFO: Pod "downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025171256s
Sep  6 21:07:18.808: INFO: Pod "downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029691037s
STEP: Saw pod success
Sep  6 21:07:18.808: INFO: Pod "downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:07:18.811: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:07:18.851: INFO: Waiting for pod downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008 to disappear
Sep  6 21:07:18.892: INFO: Pod downwardapi-volume-f0641561-f084-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:07:18.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9w928" for this suite.
Sep  6 21:07:24.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:07:24.977: INFO: namespace: e2e-tests-projected-9w928, resource: bindings, ignored listing per whitelist
Sep  6 21:07:24.995: INFO: namespace e2e-tests-projected-9w928 deletion completed in 6.099988663s

• [SLOW TEST:10.349 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:07:24.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Sep  6 21:07:25.121: INFO: namespace e2e-tests-kubectl-blghh
Sep  6 21:07:25.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-blghh'
Sep  6 21:07:25.390: INFO: stderr: ""
Sep  6 21:07:25.390: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  6 21:07:26.394: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:07:26.394: INFO: Found 0 / 1
Sep  6 21:07:27.395: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:07:27.395: INFO: Found 0 / 1
Sep  6 21:07:28.394: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:07:28.394: INFO: Found 0 / 1
Sep  6 21:07:29.394: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:07:29.394: INFO: Found 1 / 1
Sep  6 21:07:29.395: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  6 21:07:29.398: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 21:07:29.398: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  6 21:07:29.398: INFO: wait on redis-master startup in e2e-tests-kubectl-blghh 
Sep  6 21:07:29.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kvrdz redis-master --namespace=e2e-tests-kubectl-blghh'
Sep  6 21:07:29.528: INFO: stderr: ""
Sep  6 21:07:29.528: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Sep 21:07:27.845 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Sep 21:07:27.845 # Server started, Redis version 3.2.12\n1:M 06 Sep 21:07:27.845 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Sep 21:07:27.845 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Sep  6 21:07:29.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-blghh'
Sep  6 21:07:29.664: INFO: stderr: ""
Sep  6 21:07:29.664: INFO: stdout: "service/rm2 exposed\n"
Sep  6 21:07:29.675: INFO: Service rm2 in namespace e2e-tests-kubectl-blghh found.
STEP: exposing service
Sep  6 21:07:31.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-blghh'
Sep  6 21:07:31.836: INFO: stderr: ""
Sep  6 21:07:31.836: INFO: stdout: "service/rm3 exposed\n"
Sep  6 21:07:31.845: INFO: Service rm3 in namespace e2e-tests-kubectl-blghh found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:07:33.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-blghh" for this suite.
Sep  6 21:07:55.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:07:55.929: INFO: namespace: e2e-tests-kubectl-blghh, resource: bindings, ignored listing per whitelist
Sep  6 21:07:55.977: INFO: namespace e2e-tests-kubectl-blghh deletion completed in 22.120597556s

• [SLOW TEST:30.981 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
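An illustrative, hand-rolled equivalent of the expose sequence logged above (the manifest filename is assumed; the suite pipes the RC definition over stdin):

# Create the replication controller behind the services
kubectl --namespace=e2e-tests-kubectl-blghh create -f redis-master.yaml
# Expose the RC as service rm2, mapping service port 1234 to the container's 6379
kubectl --namespace=e2e-tests-kubectl-blghh expose rc redis-master --name=rm2 --port=1234 --target-port=6379
# Expose the resulting service again under a second name and port
kubectl --namespace=e2e-tests-kubectl-blghh expose service rm2 --name=rm3 --port=2345 --target-port=6379
# Both services should list the same redis-master pod IP as their endpoint
kubectl --namespace=e2e-tests-kubectl-blghh get endpoints rm2 rm3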
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:07:55.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-090b288f-f085-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:08:00.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z4hn5" for this suite.
Sep  6 21:08:22.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:08:22.240: INFO: namespace: e2e-tests-configmap-z4hn5, resource: bindings, ignored listing per whitelist
Sep  6 21:08:22.251: INFO: namespace e2e-tests-configmap-z4hn5 deletion completed in 22.092685692s

• [SLOW TEST:26.274 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:08:22.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:08:29.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qdd7v" for this suite.
Sep  6 21:08:51.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:08:51.536: INFO: namespace: e2e-tests-replication-controller-qdd7v, resource: bindings, ignored listing per whitelist
Sep  6 21:08:51.549: INFO: namespace e2e-tests-replication-controller-qdd7v deletion completed in 22.105869556s

• [SLOW TEST:29.298 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:08:51.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-2a377efa-f085-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:08:51.838: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-tpxgl" to be "success or failure"
Sep  6 21:08:51.846: INFO: Pod "pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.611803ms
Sep  6 21:08:53.850: INFO: Pod "pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01175556s
Sep  6 21:08:55.854: INFO: Pod "pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015914174s
STEP: Saw pod success
Sep  6 21:08:55.854: INFO: Pod "pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:08:55.857: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 21:08:55.870: INFO: Waiting for pod pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:08:55.875: INFO: Pod pod-projected-secrets-2a3819d5-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:08:55.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tpxgl" for this suite.
Sep  6 21:09:01.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:09:01.938: INFO: namespace: e2e-tests-projected-tpxgl, resource: bindings, ignored listing per whitelist
Sep  6 21:09:01.990: INFO: namespace e2e-tests-projected-tpxgl deletion completed in 6.110989205s

• [SLOW TEST:10.440 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
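A minimal stand-alone version of what this test exercises (secret name, keys, and mount paths are assumed): one secret consumed through two separate projected volumes in the same pod.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-reader
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-one/data-1 /etc/projected-two/data-1"]
    volumeMounts:
    - name: one
      mountPath: /etc/projected-one
    - name: two
      mountPath: /etc/projected-two
  volumes:
  - name: one
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: two
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
# After the pod succeeds, its log should show the secret value twice
kubectl logs projected-secret-reader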
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:09:01.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  6 21:09:02.140: INFO: Waiting up to 5m0s for pod "pod-30633290-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-57x4q" to be "success or failure"
Sep  6 21:09:02.171: INFO: Pod "pod-30633290-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.824776ms
Sep  6 21:09:04.175: INFO: Pod "pod-30633290-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034356719s
Sep  6 21:09:06.179: INFO: Pod "pod-30633290-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038689251s
STEP: Saw pod success
Sep  6 21:09:06.179: INFO: Pod "pod-30633290-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:09:06.182: INFO: Trying to get logs from node hunter-worker2 pod pod-30633290-f085-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:09:06.333: INFO: Waiting for pod pod-30633290-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:09:06.342: INFO: Pod pod-30633290-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:09:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-57x4q" for this suite.
Sep  6 21:09:12.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:09:12.379: INFO: namespace: e2e-tests-emptydir-57x4q, resource: bindings, ignored listing per whitelist
Sep  6 21:09:12.489: INFO: namespace e2e-tests-emptydir-57x4q deletion completed in 6.144553433s

• [SLOW TEST:10.499 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
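The "(root,0666,default)" label means roughly: write as root, expect file mode 0666, on the node's default emptyDir medium. A rough stand-alone equivalent (pod and volume names assumed):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /ephemeral/f && chmod 0666 /ephemeral/f && stat -c '%a' /ephemeral/f"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}
EOF
# For the tmpfs variants later in this run, the only change is the medium:
#   emptyDir:
#     medium: Memory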
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:09:12.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep  6 21:09:12.620: INFO: Waiting up to 5m0s for pod "pod-369e2645-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-scvxx" to be "success or failure"
Sep  6 21:09:12.630: INFO: Pod "pod-369e2645-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.427489ms
Sep  6 21:09:14.634: INFO: Pod "pod-369e2645-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013065659s
Sep  6 21:09:16.638: INFO: Pod "pod-369e2645-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017353394s
Sep  6 21:09:18.642: INFO: Pod "pod-369e2645-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021581877s
STEP: Saw pod success
Sep  6 21:09:18.642: INFO: Pod "pod-369e2645-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:09:18.645: INFO: Trying to get logs from node hunter-worker pod pod-369e2645-f085-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:09:18.676: INFO: Waiting for pod pod-369e2645-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:09:18.696: INFO: Pod pod-369e2645-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:09:18.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-scvxx" for this suite.
Sep  6 21:09:24.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:09:24.780: INFO: namespace: e2e-tests-emptydir-scvxx, resource: bindings, ignored listing per whitelist
Sep  6 21:09:24.784: INFO: namespace e2e-tests-emptydir-scvxx deletion completed in 6.085250797s

• [SLOW TEST:12.295 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:09:24.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5s786
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5s786 to expose endpoints map[]
Sep  6 21:09:24.984: INFO: Get endpoints failed (14.055183ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Sep  6 21:09:25.988: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5s786 exposes endpoints map[] (1.01809402s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5s786
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5s786 to expose endpoints map[pod1:[80]]
Sep  6 21:09:29.064: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5s786 exposes endpoints map[pod1:[80]] (3.069365462s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5s786
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5s786 to expose endpoints map[pod1:[80] pod2:[80]]
Sep  6 21:09:32.151: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5s786 exposes endpoints map[pod1:[80] pod2:[80]] (3.083080297s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5s786
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5s786 to expose endpoints map[pod2:[80]]
Sep  6 21:09:33.196: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5s786 exposes endpoints map[pod2:[80]] (1.041043763s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5s786
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5s786 to expose endpoints map[]
Sep  6 21:09:34.209: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5s786 exposes endpoints map[] (1.009113349s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:09:34.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5s786" for this suite.
Sep  6 21:09:56.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:09:56.295: INFO: namespace: e2e-tests-services-5s786, resource: bindings, ignored listing per whitelist
Sep  6 21:09:56.320: INFO: namespace e2e-tests-services-5s786 deletion completed in 22.084501387s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:31.536 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
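The same endpoint bookkeeping can be watched by hand (service and pod names here are assumed): a selector-based service gains and loses endpoint addresses as matching pods are created and deleted.

# Service whose selector defaults to app=endpoint-demo
kubectl create service clusterip endpoint-demo --tcp=80:80
# Two pods carrying that label; each becomes an endpoint once it is ready
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo --port=80
kubectl run pod2 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo --port=80
kubectl get endpoints endpoint-demo
# Deleting a pod removes its address from the endpoints object
kubectl delete pod pod1
kubectl get endpoints endpoint-demo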
------------------------------
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:09:56.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-6p5s2
I0906 21:09:56.451942       7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-6p5s2, replica count: 1
I0906 21:09:57.502402       7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:09:58.502670       7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:09:59.502899       7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  6 21:09:59.645: INFO: Created: latency-svc-dhgjk
Sep  6 21:09:59.659: INFO: Got endpoints: latency-svc-dhgjk [56.22044ms]
Sep  6 21:09:59.693: INFO: Created: latency-svc-82bzs
Sep  6 21:09:59.710: INFO: Got endpoints: latency-svc-82bzs [50.498415ms]
Sep  6 21:09:59.735: INFO: Created: latency-svc-8l6gw
Sep  6 21:09:59.784: INFO: Got endpoints: latency-svc-8l6gw [124.681283ms]
Sep  6 21:09:59.820: INFO: Created: latency-svc-5nwfx
Sep  6 21:09:59.836: INFO: Got endpoints: latency-svc-5nwfx [177.000692ms]
Sep  6 21:09:59.860: INFO: Created: latency-svc-64487
Sep  6 21:09:59.873: INFO: Got endpoints: latency-svc-64487 [213.496511ms]
Sep  6 21:09:59.907: INFO: Created: latency-svc-z9vl6
Sep  6 21:09:59.913: INFO: Got endpoints: latency-svc-z9vl6 [254.11155ms]
Sep  6 21:09:59.950: INFO: Created: latency-svc-2prpz
Sep  6 21:09:59.968: INFO: Got endpoints: latency-svc-2prpz [308.706528ms]
Sep  6 21:09:59.993: INFO: Created: latency-svc-l9j9g
Sep  6 21:10:00.068: INFO: Got endpoints: latency-svc-l9j9g [409.16533ms]
Sep  6 21:10:00.078: INFO: Created: latency-svc-4jdpg
Sep  6 21:10:00.095: INFO: Got endpoints: latency-svc-4jdpg [435.541088ms]
Sep  6 21:10:00.114: INFO: Created: latency-svc-kh5jb
Sep  6 21:10:00.131: INFO: Got endpoints: latency-svc-kh5jb [471.425029ms]
Sep  6 21:10:00.157: INFO: Created: latency-svc-hgcdb
Sep  6 21:10:00.237: INFO: Got endpoints: latency-svc-hgcdb [577.409426ms]
Sep  6 21:10:00.240: INFO: Created: latency-svc-w55jw
Sep  6 21:10:00.287: INFO: Got endpoints: latency-svc-w55jw [627.869566ms]
Sep  6 21:10:00.323: INFO: Created: latency-svc-cvnht
Sep  6 21:10:00.398: INFO: Got endpoints: latency-svc-cvnht [738.278177ms]
Sep  6 21:10:00.424: INFO: Created: latency-svc-zqqxb
Sep  6 21:10:00.454: INFO: Got endpoints: latency-svc-zqqxb [794.434873ms]
Sep  6 21:10:00.496: INFO: Created: latency-svc-fbtc2
Sep  6 21:10:00.566: INFO: Got endpoints: latency-svc-fbtc2 [906.33723ms]
Sep  6 21:10:00.587: INFO: Created: latency-svc-mjwq6
Sep  6 21:10:00.617: INFO: Got endpoints: latency-svc-mjwq6 [958.140754ms]
Sep  6 21:10:00.644: INFO: Created: latency-svc-kf4hg
Sep  6 21:10:00.653: INFO: Got endpoints: latency-svc-kf4hg [943.470703ms]
Sep  6 21:10:00.716: INFO: Created: latency-svc-cskvk
Sep  6 21:10:00.719: INFO: Got endpoints: latency-svc-cskvk [935.451807ms]
Sep  6 21:10:00.754: INFO: Created: latency-svc-pjs4h
Sep  6 21:10:00.763: INFO: Got endpoints: latency-svc-pjs4h [926.552136ms]
Sep  6 21:10:00.803: INFO: Created: latency-svc-pv5sh
Sep  6 21:10:00.859: INFO: Got endpoints: latency-svc-pv5sh [986.832842ms]
Sep  6 21:10:00.862: INFO: Created: latency-svc-8lvdt
Sep  6 21:10:00.871: INFO: Got endpoints: latency-svc-8lvdt [957.314103ms]
Sep  6 21:10:00.898: INFO: Created: latency-svc-ww2wn
Sep  6 21:10:00.913: INFO: Got endpoints: latency-svc-ww2wn [944.674695ms]
Sep  6 21:10:00.940: INFO: Created: latency-svc-jcmjx
Sep  6 21:10:00.949: INFO: Got endpoints: latency-svc-jcmjx [880.628781ms]
Sep  6 21:10:01.027: INFO: Created: latency-svc-mtqv4
Sep  6 21:10:01.030: INFO: Got endpoints: latency-svc-mtqv4 [935.434341ms]
Sep  6 21:10:01.061: INFO: Created: latency-svc-tm5fc
Sep  6 21:10:01.091: INFO: Got endpoints: latency-svc-tm5fc [960.607411ms]
Sep  6 21:10:01.126: INFO: Created: latency-svc-zv8k2
Sep  6 21:10:01.189: INFO: Got endpoints: latency-svc-zv8k2 [951.94018ms]
Sep  6 21:10:01.222: INFO: Created: latency-svc-8hhtn
Sep  6 21:10:01.232: INFO: Got endpoints: latency-svc-8hhtn [944.678453ms]
Sep  6 21:10:01.259: INFO: Created: latency-svc-ksnrc
Sep  6 21:10:01.274: INFO: Got endpoints: latency-svc-ksnrc [876.341187ms]
Sep  6 21:10:01.392: INFO: Created: latency-svc-9gdsk
Sep  6 21:10:01.396: INFO: Got endpoints: latency-svc-9gdsk [942.299005ms]
Sep  6 21:10:01.437: INFO: Created: latency-svc-9pgqs
Sep  6 21:10:01.454: INFO: Got endpoints: latency-svc-9pgqs [888.518204ms]
Sep  6 21:10:01.573: INFO: Created: latency-svc-h9kvf
Sep  6 21:10:01.576: INFO: Got endpoints: latency-svc-h9kvf [958.737281ms]
Sep  6 21:10:01.631: INFO: Created: latency-svc-pkf4b
Sep  6 21:10:01.640: INFO: Got endpoints: latency-svc-pkf4b [986.874699ms]
Sep  6 21:10:01.794: INFO: Created: latency-svc-26466
Sep  6 21:10:01.821: INFO: Got endpoints: latency-svc-26466 [1.101391246s]
Sep  6 21:10:01.841: INFO: Created: latency-svc-8xv7c
Sep  6 21:10:01.857: INFO: Got endpoints: latency-svc-8xv7c [1.094072276s]
Sep  6 21:10:01.876: INFO: Created: latency-svc-gx5pf
Sep  6 21:10:01.887: INFO: Got endpoints: latency-svc-gx5pf [1.027037979s]
Sep  6 21:10:01.973: INFO: Created: latency-svc-vqfmm
Sep  6 21:10:01.977: INFO: Got endpoints: latency-svc-vqfmm [1.106270541s]
Sep  6 21:10:02.013: INFO: Created: latency-svc-79zfh
Sep  6 21:10:02.043: INFO: Got endpoints: latency-svc-79zfh [1.129928017s]
Sep  6 21:10:02.146: INFO: Created: latency-svc-cr9cv
Sep  6 21:10:02.150: INFO: Got endpoints: latency-svc-cr9cv [1.201235368s]
Sep  6 21:10:02.182: INFO: Created: latency-svc-dh65v
Sep  6 21:10:02.193: INFO: Got endpoints: latency-svc-dh65v [1.162746298s]
Sep  6 21:10:02.217: INFO: Created: latency-svc-wfk5t
Sep  6 21:10:02.326: INFO: Got endpoints: latency-svc-wfk5t [1.234827616s]
Sep  6 21:10:02.330: INFO: Created: latency-svc-hg96h
Sep  6 21:10:02.338: INFO: Got endpoints: latency-svc-hg96h [1.149376986s]
Sep  6 21:10:02.380: INFO: Created: latency-svc-v7sdj
Sep  6 21:10:02.398: INFO: Got endpoints: latency-svc-v7sdj [1.166565338s]
Sep  6 21:10:02.422: INFO: Created: latency-svc-nzkrq
Sep  6 21:10:02.499: INFO: Got endpoints: latency-svc-nzkrq [1.22485704s]
Sep  6 21:10:02.501: INFO: Created: latency-svc-rmdhh
Sep  6 21:10:02.507: INFO: Got endpoints: latency-svc-rmdhh [1.11045953s]
Sep  6 21:10:02.547: INFO: Created: latency-svc-6qggq
Sep  6 21:10:02.589: INFO: Got endpoints: latency-svc-6qggq [1.134562343s]
Sep  6 21:10:02.644: INFO: Created: latency-svc-bq9mm
Sep  6 21:10:02.651: INFO: Got endpoints: latency-svc-bq9mm [1.07455199s]
Sep  6 21:10:02.674: INFO: Created: latency-svc-cgdmf
Sep  6 21:10:02.699: INFO: Got endpoints: latency-svc-cgdmf [1.059138052s]
Sep  6 21:10:02.741: INFO: Created: latency-svc-h7fxj
Sep  6 21:10:02.817: INFO: Got endpoints: latency-svc-h7fxj [995.788692ms]
Sep  6 21:10:02.818: INFO: Created: latency-svc-8txc9
Sep  6 21:10:02.825: INFO: Got endpoints: latency-svc-8txc9 [968.24155ms]
Sep  6 21:10:02.852: INFO: Created: latency-svc-bt8zf
Sep  6 21:10:02.867: INFO: Got endpoints: latency-svc-bt8zf [980.579714ms]
Sep  6 21:10:02.902: INFO: Created: latency-svc-l2b5c
Sep  6 21:10:02.985: INFO: Got endpoints: latency-svc-l2b5c [1.00776977s]
Sep  6 21:10:02.997: INFO: Created: latency-svc-ssxf9
Sep  6 21:10:03.018: INFO: Got endpoints: latency-svc-ssxf9 [975.468185ms]
Sep  6 21:10:03.069: INFO: Created: latency-svc-7jmcz
Sep  6 21:10:03.139: INFO: Got endpoints: latency-svc-7jmcz [989.056769ms]
Sep  6 21:10:03.143: INFO: Created: latency-svc-slt7j
Sep  6 21:10:03.150: INFO: Got endpoints: latency-svc-slt7j [956.82958ms]
Sep  6 21:10:03.178: INFO: Created: latency-svc-vmlf8
Sep  6 21:10:03.186: INFO: Got endpoints: latency-svc-vmlf8 [860.075407ms]
Sep  6 21:10:03.208: INFO: Created: latency-svc-vs675
Sep  6 21:10:03.230: INFO: Got endpoints: latency-svc-vs675 [892.024989ms]
Sep  6 21:10:03.291: INFO: Created: latency-svc-q86lk
Sep  6 21:10:03.295: INFO: Got endpoints: latency-svc-q86lk [896.12595ms]
Sep  6 21:10:03.320: INFO: Created: latency-svc-s4r5f
Sep  6 21:10:03.340: INFO: Got endpoints: latency-svc-s4r5f [841.092973ms]
Sep  6 21:10:03.363: INFO: Created: latency-svc-nhqtq
Sep  6 21:10:03.380: INFO: Got endpoints: latency-svc-nhqtq [873.517913ms]
Sep  6 21:10:03.446: INFO: Created: latency-svc-m7z6l
Sep  6 21:10:03.494: INFO: Got endpoints: latency-svc-m7z6l [905.474194ms]
Sep  6 21:10:03.537: INFO: Created: latency-svc-gzr6w
Sep  6 21:10:03.620: INFO: Got endpoints: latency-svc-gzr6w [968.926322ms]
Sep  6 21:10:03.621: INFO: Created: latency-svc-vdt75
Sep  6 21:10:03.632: INFO: Got endpoints: latency-svc-vdt75 [932.141694ms]
Sep  6 21:10:03.658: INFO: Created: latency-svc-hdbql
Sep  6 21:10:03.669: INFO: Got endpoints: latency-svc-hdbql [851.953264ms]
Sep  6 21:10:03.694: INFO: Created: latency-svc-sfq5x
Sep  6 21:10:03.711: INFO: Got endpoints: latency-svc-sfq5x [885.548203ms]
Sep  6 21:10:03.793: INFO: Created: latency-svc-97pc6
Sep  6 21:10:03.797: INFO: Got endpoints: latency-svc-97pc6 [930.027855ms]
Sep  6 21:10:03.874: INFO: Created: latency-svc-882pj
Sep  6 21:10:03.985: INFO: Got endpoints: latency-svc-882pj [1.000351881s]
Sep  6 21:10:03.988: INFO: Created: latency-svc-tpvl6
Sep  6 21:10:03.999: INFO: Got endpoints: latency-svc-tpvl6 [980.758743ms]
Sep  6 21:10:04.023: INFO: Created: latency-svc-f8jq4
Sep  6 21:10:04.041: INFO: Got endpoints: latency-svc-f8jq4 [901.810366ms]
Sep  6 21:10:04.064: INFO: Created: latency-svc-2hwvm
Sep  6 21:10:04.077: INFO: Got endpoints: latency-svc-2hwvm [927.492827ms]
Sep  6 21:10:04.146: INFO: Created: latency-svc-f24sl
Sep  6 21:10:04.174: INFO: Got endpoints: latency-svc-f24sl [987.199025ms]
Sep  6 21:10:04.174: INFO: Created: latency-svc-pd2k8
Sep  6 21:10:04.204: INFO: Got endpoints: latency-svc-pd2k8 [973.39684ms]
Sep  6 21:10:04.327: INFO: Created: latency-svc-54lwc
Sep  6 21:10:04.330: INFO: Got endpoints: latency-svc-54lwc [1.035820972s]
Sep  6 21:10:04.370: INFO: Created: latency-svc-g5dsx
Sep  6 21:10:04.378: INFO: Got endpoints: latency-svc-g5dsx [1.037460919s]
Sep  6 21:10:04.408: INFO: Created: latency-svc-76qrj
Sep  6 21:10:04.420: INFO: Got endpoints: latency-svc-76qrj [1.040270202s]
Sep  6 21:10:04.482: INFO: Created: latency-svc-fdtcf
Sep  6 21:10:04.486: INFO: Got endpoints: latency-svc-fdtcf [991.85292ms]
Sep  6 21:10:04.520: INFO: Created: latency-svc-cdrr6
Sep  6 21:10:04.535: INFO: Got endpoints: latency-svc-cdrr6 [915.666558ms]
Sep  6 21:10:04.562: INFO: Created: latency-svc-8rh8v
Sep  6 21:10:04.577: INFO: Got endpoints: latency-svc-8rh8v [945.291856ms]
Sep  6 21:10:04.626: INFO: Created: latency-svc-9rdlj
Sep  6 21:10:04.631: INFO: Got endpoints: latency-svc-9rdlj [962.413619ms]
Sep  6 21:10:04.653: INFO: Created: latency-svc-fc628
Sep  6 21:10:04.668: INFO: Got endpoints: latency-svc-fc628 [957.168674ms]
Sep  6 21:10:04.688: INFO: Created: latency-svc-g49q4
Sep  6 21:10:04.704: INFO: Got endpoints: latency-svc-g49q4 [906.384775ms]
Sep  6 21:10:04.724: INFO: Created: latency-svc-tn52n
Sep  6 21:10:04.786: INFO: Got endpoints: latency-svc-tn52n [801.032451ms]
Sep  6 21:10:04.789: INFO: Created: latency-svc-9xs8l
Sep  6 21:10:04.809: INFO: Got endpoints: latency-svc-9xs8l [809.981314ms]
Sep  6 21:10:04.840: INFO: Created: latency-svc-5s54t
Sep  6 21:10:04.867: INFO: Got endpoints: latency-svc-5s54t [825.62763ms]
Sep  6 21:10:04.949: INFO: Created: latency-svc-sqpsx
Sep  6 21:10:04.953: INFO: Got endpoints: latency-svc-sqpsx [874.948281ms]
Sep  6 21:10:04.994: INFO: Created: latency-svc-4knqj
Sep  6 21:10:05.011: INFO: Got endpoints: latency-svc-4knqj [837.276729ms]
Sep  6 21:10:05.044: INFO: Created: latency-svc-tpg4l
Sep  6 21:10:05.110: INFO: Got endpoints: latency-svc-tpg4l [906.191543ms]
Sep  6 21:10:05.112: INFO: Created: latency-svc-ptwpk
Sep  6 21:10:05.125: INFO: Got endpoints: latency-svc-ptwpk [794.389829ms]
Sep  6 21:10:05.163: INFO: Created: latency-svc-2hlt5
Sep  6 21:10:05.186: INFO: Got endpoints: latency-svc-2hlt5 [808.160503ms]
Sep  6 21:10:05.267: INFO: Created: latency-svc-4t75z
Sep  6 21:10:05.269: INFO: Got endpoints: latency-svc-4t75z [848.887524ms]
Sep  6 21:10:05.308: INFO: Created: latency-svc-gcc7q
Sep  6 21:10:05.336: INFO: Got endpoints: latency-svc-gcc7q [849.872617ms]
Sep  6 21:10:05.464: INFO: Created: latency-svc-jpkgv
Sep  6 21:10:05.467: INFO: Got endpoints: latency-svc-jpkgv [932.009462ms]
Sep  6 21:10:05.558: INFO: Created: latency-svc-hp9vh
Sep  6 21:10:05.613: INFO: Got endpoints: latency-svc-hp9vh [1.0364776s]
Sep  6 21:10:05.655: INFO: Created: latency-svc-np7h8
Sep  6 21:10:05.684: INFO: Got endpoints: latency-svc-np7h8 [1.053322853s]
Sep  6 21:10:05.708: INFO: Created: latency-svc-ch7zb
Sep  6 21:10:05.769: INFO: Got endpoints: latency-svc-ch7zb [1.100669901s]
Sep  6 21:10:05.799: INFO: Created: latency-svc-m59qp
Sep  6 21:10:05.829: INFO: Got endpoints: latency-svc-m59qp [1.124873373s]
Sep  6 21:10:05.853: INFO: Created: latency-svc-g925p
Sep  6 21:10:05.865: INFO: Got endpoints: latency-svc-g925p [1.078343989s]
Sep  6 21:10:05.913: INFO: Created: latency-svc-kgfzm
Sep  6 21:10:05.953: INFO: Got endpoints: latency-svc-kgfzm [1.1442799s]
Sep  6 21:10:05.996: INFO: Created: latency-svc-jnfx4
Sep  6 21:10:06.009: INFO: Got endpoints: latency-svc-jnfx4 [1.141762767s]
Sep  6 21:10:06.062: INFO: Created: latency-svc-lv64q
Sep  6 21:10:06.069: INFO: Got endpoints: latency-svc-lv64q [1.116535977s]
Sep  6 21:10:06.093: INFO: Created: latency-svc-bsgg9
Sep  6 21:10:06.118: INFO: Got endpoints: latency-svc-bsgg9 [1.106809145s]
Sep  6 21:10:06.141: INFO: Created: latency-svc-pf767
Sep  6 21:10:06.154: INFO: Got endpoints: latency-svc-pf767 [1.044030764s]
Sep  6 21:10:06.243: INFO: Created: latency-svc-zjwb6
Sep  6 21:10:06.249: INFO: Got endpoints: latency-svc-zjwb6 [1.123603216s]
Sep  6 21:10:06.327: INFO: Created: latency-svc-nlr6v
Sep  6 21:10:06.341: INFO: Got endpoints: latency-svc-nlr6v [1.154645331s]
Sep  6 21:10:06.398: INFO: Created: latency-svc-tfsv6
Sep  6 21:10:06.406: INFO: Got endpoints: latency-svc-tfsv6 [1.136845121s]
Sep  6 21:10:06.451: INFO: Created: latency-svc-fs7g9
Sep  6 21:10:06.485: INFO: Got endpoints: latency-svc-fs7g9 [1.148761735s]
Sep  6 21:10:06.596: INFO: Created: latency-svc-69764
Sep  6 21:10:06.599: INFO: Got endpoints: latency-svc-69764 [1.131057041s]
Sep  6 21:10:06.633: INFO: Created: latency-svc-x4dvv
Sep  6 21:10:06.641: INFO: Got endpoints: latency-svc-x4dvv [1.027417287s]
Sep  6 21:10:06.662: INFO: Created: latency-svc-xwwx8
Sep  6 21:10:06.685: INFO: Got endpoints: latency-svc-xwwx8 [1.00027756s]
Sep  6 21:10:06.739: INFO: Created: latency-svc-dkdwz
Sep  6 21:10:06.749: INFO: Got endpoints: latency-svc-dkdwz [980.57288ms]
Sep  6 21:10:06.793: INFO: Created: latency-svc-8gr4c
Sep  6 21:10:06.810: INFO: Got endpoints: latency-svc-8gr4c [981.025293ms]
Sep  6 21:10:06.889: INFO: Created: latency-svc-lzn4f
Sep  6 21:10:06.894: INFO: Got endpoints: latency-svc-lzn4f [144.414027ms]
Sep  6 21:10:06.937: INFO: Created: latency-svc-zmkwf
Sep  6 21:10:06.954: INFO: Got endpoints: latency-svc-zmkwf [1.089614336s]
Sep  6 21:10:06.973: INFO: Created: latency-svc-ltr8g
Sep  6 21:10:07.056: INFO: Got endpoints: latency-svc-ltr8g [1.102442755s]
Sep  6 21:10:07.058: INFO: Created: latency-svc-zqmvl
Sep  6 21:10:07.088: INFO: Got endpoints: latency-svc-zqmvl [1.079519036s]
Sep  6 21:10:07.130: INFO: Created: latency-svc-59df7
Sep  6 21:10:07.140: INFO: Got endpoints: latency-svc-59df7 [1.071173854s]
Sep  6 21:10:07.237: INFO: Created: latency-svc-hw45h
Sep  6 21:10:07.240: INFO: Got endpoints: latency-svc-hw45h [1.121900967s]
Sep  6 21:10:07.273: INFO: Created: latency-svc-kcphr
Sep  6 21:10:07.291: INFO: Got endpoints: latency-svc-kcphr [1.136980494s]
Sep  6 21:10:07.317: INFO: Created: latency-svc-llms5
Sep  6 21:10:07.334: INFO: Got endpoints: latency-svc-llms5 [1.084977658s]
Sep  6 21:10:07.387: INFO: Created: latency-svc-qzjjq
Sep  6 21:10:07.407: INFO: Got endpoints: latency-svc-qzjjq [1.065918481s]
Sep  6 21:10:07.436: INFO: Created: latency-svc-qgtc8
Sep  6 21:10:07.454: INFO: Got endpoints: latency-svc-qgtc8 [1.047383095s]
Sep  6 21:10:07.477: INFO: Created: latency-svc-9fq8j
Sep  6 21:10:07.560: INFO: Got endpoints: latency-svc-9fq8j [1.074703894s]
Sep  6 21:10:07.563: INFO: Created: latency-svc-nzz9b
Sep  6 21:10:07.574: INFO: Got endpoints: latency-svc-nzz9b [975.184821ms]
Sep  6 21:10:07.599: INFO: Created: latency-svc-dwsqs
Sep  6 21:10:07.617: INFO: Got endpoints: latency-svc-dwsqs [975.551788ms]
Sep  6 21:10:07.709: INFO: Created: latency-svc-pzdcg
Sep  6 21:10:07.719: INFO: Got endpoints: latency-svc-pzdcg [1.033909084s]
Sep  6 21:10:07.741: INFO: Created: latency-svc-ckwpb
Sep  6 21:10:07.755: INFO: Got endpoints: latency-svc-ckwpb [944.865853ms]
Sep  6 21:10:07.790: INFO: Created: latency-svc-ktx8w
Sep  6 21:10:07.865: INFO: Got endpoints: latency-svc-ktx8w [971.227948ms]
Sep  6 21:10:07.868: INFO: Created: latency-svc-48hbk
Sep  6 21:10:07.876: INFO: Got endpoints: latency-svc-48hbk [921.398256ms]
Sep  6 21:10:07.903: INFO: Created: latency-svc-mbmbf
Sep  6 21:10:07.918: INFO: Got endpoints: latency-svc-mbmbf [861.975888ms]
Sep  6 21:10:07.945: INFO: Created: latency-svc-hvq2q
Sep  6 21:10:08.020: INFO: Got endpoints: latency-svc-hvq2q [931.996836ms]
Sep  6 21:10:08.058: INFO: Created: latency-svc-stlbj
Sep  6 21:10:08.068: INFO: Got endpoints: latency-svc-stlbj [927.908993ms]
Sep  6 21:10:08.091: INFO: Created: latency-svc-kd7fz
Sep  6 21:10:08.105: INFO: Got endpoints: latency-svc-kd7fz [864.576348ms]
Sep  6 21:10:08.183: INFO: Created: latency-svc-997w2
Sep  6 21:10:08.187: INFO: Got endpoints: latency-svc-997w2 [895.748642ms]
Sep  6 21:10:08.239: INFO: Created: latency-svc-5fckx
Sep  6 21:10:08.249: INFO: Got endpoints: latency-svc-5fckx [915.170821ms]
Sep  6 21:10:08.349: INFO: Created: latency-svc-g6wmm
Sep  6 21:10:08.353: INFO: Got endpoints: latency-svc-g6wmm [946.424937ms]
Sep  6 21:10:08.390: INFO: Created: latency-svc-8z2sj
Sep  6 21:10:08.406: INFO: Got endpoints: latency-svc-8z2sj [951.624507ms]
Sep  6 21:10:08.443: INFO: Created: latency-svc-2gfr9
Sep  6 21:10:08.500: INFO: Got endpoints: latency-svc-2gfr9 [939.828832ms]
Sep  6 21:10:08.501: INFO: Created: latency-svc-gwb8j
Sep  6 21:10:08.508: INFO: Got endpoints: latency-svc-gwb8j [933.823172ms]
Sep  6 21:10:08.558: INFO: Created: latency-svc-7j2kq
Sep  6 21:10:08.575: INFO: Got endpoints: latency-svc-7j2kq [957.892743ms]
Sep  6 21:10:08.661: INFO: Created: latency-svc-2phnj
Sep  6 21:10:08.694: INFO: Got endpoints: latency-svc-2phnj [975.564872ms]
Sep  6 21:10:08.695: INFO: Created: latency-svc-wmcd9
Sep  6 21:10:08.725: INFO: Got endpoints: latency-svc-wmcd9 [969.803758ms]
Sep  6 21:10:08.755: INFO: Created: latency-svc-w4g7v
Sep  6 21:10:08.817: INFO: Got endpoints: latency-svc-w4g7v [952.04707ms]
Sep  6 21:10:08.820: INFO: Created: latency-svc-2cdjg
Sep  6 21:10:08.833: INFO: Got endpoints: latency-svc-2cdjg [956.973843ms]
Sep  6 21:10:08.852: INFO: Created: latency-svc-2qh8n
Sep  6 21:10:08.869: INFO: Got endpoints: latency-svc-2qh8n [951.211596ms]
Sep  6 21:10:08.894: INFO: Created: latency-svc-r6qsj
Sep  6 21:10:08.912: INFO: Got endpoints: latency-svc-r6qsj [891.187403ms]
Sep  6 21:10:08.972: INFO: Created: latency-svc-vcprn
Sep  6 21:10:08.976: INFO: Got endpoints: latency-svc-vcprn [907.635922ms]
Sep  6 21:10:09.037: INFO: Created: latency-svc-8lfdf
Sep  6 21:10:09.128: INFO: Got endpoints: latency-svc-8lfdf [1.023666914s]
Sep  6 21:10:09.130: INFO: Created: latency-svc-qqdbx
Sep  6 21:10:09.140: INFO: Got endpoints: latency-svc-qqdbx [952.78648ms]
Sep  6 21:10:09.164: INFO: Created: latency-svc-qjxgh
Sep  6 21:10:09.182: INFO: Got endpoints: latency-svc-qjxgh [933.566324ms]
Sep  6 21:10:09.205: INFO: Created: latency-svc-7csxf
Sep  6 21:10:09.219: INFO: Got endpoints: latency-svc-7csxf [865.590004ms]
Sep  6 21:10:09.308: INFO: Created: latency-svc-7nbjn
Sep  6 21:10:09.311: INFO: Got endpoints: latency-svc-7nbjn [905.220822ms]
Sep  6 21:10:09.350: INFO: Created: latency-svc-mk4lq
Sep  6 21:10:09.379: INFO: Got endpoints: latency-svc-mk4lq [879.542109ms]
Sep  6 21:10:09.478: INFO: Created: latency-svc-rrvlp
Sep  6 21:10:09.480: INFO: Got endpoints: latency-svc-rrvlp [972.613531ms]
Sep  6 21:10:09.510: INFO: Created: latency-svc-vk2sr
Sep  6 21:10:09.519: INFO: Got endpoints: latency-svc-vk2sr [944.382145ms]
Sep  6 21:10:09.564: INFO: Created: latency-svc-kzpzb
Sep  6 21:10:09.655: INFO: Got endpoints: latency-svc-kzpzb [960.08253ms]
Sep  6 21:10:09.657: INFO: Created: latency-svc-ts95x
Sep  6 21:10:09.690: INFO: Created: latency-svc-jcdgg
Sep  6 21:10:09.744: INFO: Got endpoints: latency-svc-ts95x [1.019250356s]
Sep  6 21:10:09.744: INFO: Created: latency-svc-ssd6x
Sep  6 21:10:09.799: INFO: Got endpoints: latency-svc-ssd6x [966.000361ms]
Sep  6 21:10:09.836: INFO: Got endpoints: latency-svc-jcdgg [1.018438421s]
Sep  6 21:10:09.836: INFO: Created: latency-svc-5zzrl
Sep  6 21:10:09.857: INFO: Got endpoints: latency-svc-5zzrl [987.314589ms]
Sep  6 21:10:09.878: INFO: Created: latency-svc-4qc4k
Sep  6 21:10:09.893: INFO: Got endpoints: latency-svc-4qc4k [980.821963ms]
Sep  6 21:10:09.972: INFO: Created: latency-svc-6pvv4
Sep  6 21:10:09.975: INFO: Got endpoints: latency-svc-6pvv4 [999.048445ms]
Sep  6 21:10:10.034: INFO: Created: latency-svc-2hpkq
Sep  6 21:10:10.064: INFO: Got endpoints: latency-svc-2hpkq [935.223107ms]
Sep  6 21:10:10.135: INFO: Created: latency-svc-ljzmh
Sep  6 21:10:10.158: INFO: Got endpoints: latency-svc-ljzmh [1.018068645s]
Sep  6 21:10:10.182: INFO: Created: latency-svc-wmjld
Sep  6 21:10:10.193: INFO: Got endpoints: latency-svc-wmjld [1.010817586s]
Sep  6 21:10:10.218: INFO: Created: latency-svc-hj2jd
Sep  6 21:10:10.229: INFO: Got endpoints: latency-svc-hj2jd [1.010712103s]
Sep  6 21:10:10.278: INFO: Created: latency-svc-8wvnh
Sep  6 21:10:10.284: INFO: Got endpoints: latency-svc-8wvnh [973.37367ms]
Sep  6 21:10:10.311: INFO: Created: latency-svc-qx62n
Sep  6 21:10:10.339: INFO: Got endpoints: latency-svc-qx62n [959.939297ms]
Sep  6 21:10:10.374: INFO: Created: latency-svc-br54g
Sep  6 21:10:10.464: INFO: Got endpoints: latency-svc-br54g [983.351226ms]
Sep  6 21:10:10.466: INFO: Created: latency-svc-vmq5m
Sep  6 21:10:10.476: INFO: Got endpoints: latency-svc-vmq5m [957.063645ms]
Sep  6 21:10:10.502: INFO: Created: latency-svc-rchlx
Sep  6 21:10:10.525: INFO: Got endpoints: latency-svc-rchlx [870.074978ms]
Sep  6 21:10:10.631: INFO: Created: latency-svc-zdxrc
Sep  6 21:10:10.634: INFO: Got endpoints: latency-svc-zdxrc [889.962407ms]
Sep  6 21:10:10.663: INFO: Created: latency-svc-znsk5
Sep  6 21:10:10.675: INFO: Got endpoints: latency-svc-znsk5 [875.834588ms]
Sep  6 21:10:10.700: INFO: Created: latency-svc-pmgss
Sep  6 21:10:10.717: INFO: Got endpoints: latency-svc-pmgss [881.506309ms]
Sep  6 21:10:10.820: INFO: Created: latency-svc-jd8tm
Sep  6 21:10:10.854: INFO: Created: latency-svc-82gbw
Sep  6 21:10:10.854: INFO: Got endpoints: latency-svc-jd8tm [997.47476ms]
Sep  6 21:10:10.868: INFO: Got endpoints: latency-svc-82gbw [975.12933ms]
Sep  6 21:10:10.902: INFO: Created: latency-svc-hjk4w
Sep  6 21:10:10.966: INFO: Got endpoints: latency-svc-hjk4w [990.722938ms]
Sep  6 21:10:10.968: INFO: Created: latency-svc-mwpfh
Sep  6 21:10:10.976: INFO: Got endpoints: latency-svc-mwpfh [912.472369ms]
Sep  6 21:10:11.005: INFO: Created: latency-svc-nbvsm
Sep  6 21:10:11.035: INFO: Got endpoints: latency-svc-nbvsm [877.137047ms]
Sep  6 21:10:11.160: INFO: Created: latency-svc-f29bw
Sep  6 21:10:11.162: INFO: Got endpoints: latency-svc-f29bw [968.839991ms]
Sep  6 21:10:11.209: INFO: Created: latency-svc-8wm4j
Sep  6 21:10:11.223: INFO: Got endpoints: latency-svc-8wm4j [993.200622ms]
Sep  6 21:10:11.251: INFO: Created: latency-svc-rqssl
Sep  6 21:10:11.316: INFO: Got endpoints: latency-svc-rqssl [1.031409464s]
Sep  6 21:10:11.346: INFO: Created: latency-svc-s7nwb
Sep  6 21:10:11.361: INFO: Got endpoints: latency-svc-s7nwb [1.021737108s]
Sep  6 21:10:11.382: INFO: Created: latency-svc-rf7zs
Sep  6 21:10:11.404: INFO: Got endpoints: latency-svc-rf7zs [939.655516ms]
Sep  6 21:10:11.465: INFO: Created: latency-svc-cw9fg
Sep  6 21:10:11.469: INFO: Got endpoints: latency-svc-cw9fg [993.202977ms]
Sep  6 21:10:11.497: INFO: Created: latency-svc-jp45l
Sep  6 21:10:11.512: INFO: Got endpoints: latency-svc-jp45l [987.374817ms]
Sep  6 21:10:11.539: INFO: Created: latency-svc-9lxmv
Sep  6 21:10:11.554: INFO: Got endpoints: latency-svc-9lxmv [920.055819ms]
Sep  6 21:10:11.643: INFO: Created: latency-svc-45xpg
Sep  6 21:10:11.646: INFO: Got endpoints: latency-svc-45xpg [970.945011ms]
Sep  6 21:10:11.683: INFO: Created: latency-svc-9q5r2
Sep  6 21:10:11.713: INFO: Got endpoints: latency-svc-9q5r2 [995.598768ms]
Sep  6 21:10:11.824: INFO: Created: latency-svc-nw6v6
Sep  6 21:10:11.827: INFO: Got endpoints: latency-svc-nw6v6 [972.66799ms]
Sep  6 21:10:11.862: INFO: Created: latency-svc-rdp7n
Sep  6 21:10:11.873: INFO: Got endpoints: latency-svc-rdp7n [1.004795357s]
Sep  6 21:10:11.898: INFO: Created: latency-svc-6m4j6
Sep  6 21:10:12.002: INFO: Got endpoints: latency-svc-6m4j6 [1.036220825s]
Sep  6 21:10:12.004: INFO: Created: latency-svc-9r2z7
Sep  6 21:10:12.011: INFO: Got endpoints: latency-svc-9r2z7 [1.034918127s]
Sep  6 21:10:12.042: INFO: Created: latency-svc-nlc2j
Sep  6 21:10:12.059: INFO: Got endpoints: latency-svc-nlc2j [1.024431236s]
Sep  6 21:10:12.084: INFO: Created: latency-svc-5vsgr
Sep  6 21:10:12.158: INFO: Got endpoints: latency-svc-5vsgr [995.835227ms]
Sep  6 21:10:12.161: INFO: Created: latency-svc-kdm8t
Sep  6 21:10:12.168: INFO: Got endpoints: latency-svc-kdm8t [945.033349ms]
Sep  6 21:10:12.193: INFO: Created: latency-svc-hk4m4
Sep  6 21:10:12.210: INFO: Got endpoints: latency-svc-hk4m4 [894.352663ms]
Sep  6 21:10:12.247: INFO: Created: latency-svc-gf56p
Sep  6 21:10:12.321: INFO: Got endpoints: latency-svc-gf56p [959.552037ms]
Sep  6 21:10:12.324: INFO: Created: latency-svc-ctlcw
Sep  6 21:10:12.329: INFO: Got endpoints: latency-svc-ctlcw [925.406166ms]
Sep  6 21:10:12.354: INFO: Created: latency-svc-25z42
Sep  6 21:10:12.383: INFO: Got endpoints: latency-svc-25z42 [913.728906ms]
Sep  6 21:10:12.419: INFO: Created: latency-svc-sdlhp
Sep  6 21:10:12.488: INFO: Got endpoints: latency-svc-sdlhp [975.578197ms]
Sep  6 21:10:12.490: INFO: Created: latency-svc-2r5dg
Sep  6 21:10:12.516: INFO: Got endpoints: latency-svc-2r5dg [961.636887ms]
Sep  6 21:10:12.545: INFO: Created: latency-svc-snlqk
Sep  6 21:10:12.564: INFO: Got endpoints: latency-svc-snlqk [917.941486ms]
Sep  6 21:10:12.564: INFO: Latencies: [50.498415ms 124.681283ms 144.414027ms 177.000692ms 213.496511ms 254.11155ms 308.706528ms 409.16533ms 435.541088ms 471.425029ms 577.409426ms 627.869566ms 738.278177ms 794.389829ms 794.434873ms 801.032451ms 808.160503ms 809.981314ms 825.62763ms 837.276729ms 841.092973ms 848.887524ms 849.872617ms 851.953264ms 860.075407ms 861.975888ms 864.576348ms 865.590004ms 870.074978ms 873.517913ms 874.948281ms 875.834588ms 876.341187ms 877.137047ms 879.542109ms 880.628781ms 881.506309ms 885.548203ms 888.518204ms 889.962407ms 891.187403ms 892.024989ms 894.352663ms 895.748642ms 896.12595ms 901.810366ms 905.220822ms 905.474194ms 906.191543ms 906.33723ms 906.384775ms 907.635922ms 912.472369ms 913.728906ms 915.170821ms 915.666558ms 917.941486ms 920.055819ms 921.398256ms 925.406166ms 926.552136ms 927.492827ms 927.908993ms 930.027855ms 931.996836ms 932.009462ms 932.141694ms 933.566324ms 933.823172ms 935.223107ms 935.434341ms 935.451807ms 939.655516ms 939.828832ms 942.299005ms 943.470703ms 944.382145ms 944.674695ms 944.678453ms 944.865853ms 945.033349ms 945.291856ms 946.424937ms 951.211596ms 951.624507ms 951.94018ms 952.04707ms 952.78648ms 956.82958ms 956.973843ms 957.063645ms 957.168674ms 957.314103ms 957.892743ms 958.140754ms 958.737281ms 959.552037ms 959.939297ms 960.08253ms 960.607411ms 961.636887ms 962.413619ms 966.000361ms 968.24155ms 968.839991ms 968.926322ms 969.803758ms 970.945011ms 971.227948ms 972.613531ms 972.66799ms 973.37367ms 973.39684ms 975.12933ms 975.184821ms 975.468185ms 975.551788ms 975.564872ms 975.578197ms 980.57288ms 980.579714ms 980.758743ms 980.821963ms 981.025293ms 983.351226ms 986.832842ms 986.874699ms 987.199025ms 987.314589ms 987.374817ms 989.056769ms 990.722938ms 991.85292ms 993.200622ms 993.202977ms 995.598768ms 995.788692ms 995.835227ms 997.47476ms 999.048445ms 1.00027756s 1.000351881s 1.004795357s 1.00776977s 1.010712103s 1.010817586s 1.018068645s 1.018438421s 1.019250356s 1.021737108s 1.023666914s 1.024431236s 1.027037979s 1.027417287s 1.031409464s 1.033909084s 1.034918127s 1.035820972s 1.036220825s 1.0364776s 1.037460919s 1.040270202s 1.044030764s 1.047383095s 1.053322853s 1.059138052s 1.065918481s 1.071173854s 1.07455199s 1.074703894s 1.078343989s 1.079519036s 1.084977658s 1.089614336s 1.094072276s 1.100669901s 1.101391246s 1.102442755s 1.106270541s 1.106809145s 1.11045953s 1.116535977s 1.121900967s 1.123603216s 1.124873373s 1.129928017s 1.131057041s 1.134562343s 1.136845121s 1.136980494s 1.141762767s 1.1442799s 1.148761735s 1.149376986s 1.154645331s 1.162746298s 1.166565338s 1.201235368s 1.22485704s 1.234827616s]
Sep  6 21:10:12.564: INFO: 50 %ile: 961.636887ms
Sep  6 21:10:12.564: INFO: 90 %ile: 1.11045953s
Sep  6 21:10:12.564: INFO: 99 %ile: 1.22485704s
Sep  6 21:10:12.564: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:10:12.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-6p5s2" for this suite.
Sep  6 21:10:38.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:10:38.626: INFO: namespace: e2e-tests-svc-latency-6p5s2, resource: bindings, ignored listing per whitelist
Sep  6 21:10:38.657: INFO: namespace e2e-tests-svc-latency-6p5s2 deletion completed in 26.085646243s

• [SLOW TEST:42.336 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
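Each "Created"/"Got endpoints" pair above times how long a freshly created service takes to publish an endpoint for the already-running backend pod. A rough manual probe of the same measurement (service name assumed; it presumes a backend RC such as svc-latency-rc is still running):

start=$(date +%s%N)
kubectl expose rc svc-latency-rc --name=latency-probe --port=80 --target-port=80
# Poll until the endpoints object carries at least one address, then report elapsed time
until kubectl get endpoints latency-probe -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q .; do
  sleep 0.05
done
echo "endpoint visible after $(( ($(date +%s%N) - start) / 1000000 )) ms"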
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:10:38.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep  6 21:10:48.853: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:48.858: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:10:50.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:50.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:10:52.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:52.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:10:54.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:54.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:10:56.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:56.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:10:58.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:10:58.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:00.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:00.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:02.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:02.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:04.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:04.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:06.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:06.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:08.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:08.863: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  6 21:11:10.859: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  6 21:11:10.863: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:11:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-m7ln9" for this suite.
Sep  6 21:11:32.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:11:32.964: INFO: namespace: e2e-tests-container-lifecycle-hook-m7ln9, resource: bindings, ignored listing per whitelist
Sep  6 21:11:32.973: INFO: namespace e2e-tests-container-lifecycle-hook-m7ln9 deletion completed in 22.10523338s

• [SLOW TEST:54.316 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
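A stripped-down stand-alone version of the postStart scenario (pod name, image, and hook command are assumed; the suite's hook instead calls out to the handler pod created in BeforeEach). If the hook command fails, the kubelet kills the container and restarts it according to its restartPolicy.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-exec-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created
          command: ["sh", "-c", "echo started > /tmp/poststart-ran"]
EOF
# The marker file confirms the hook executed in the container's filesystem
kubectl exec poststart-exec-demo -- cat /tmp/poststart-ran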
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:11:32.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  6 21:11:33.083: INFO: Waiting up to 5m0s for pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-bcpz6" to be "success or failure"
Sep  6 21:11:33.087: INFO: Pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948996ms
Sep  6 21:11:35.091: INFO: Pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008081655s
Sep  6 21:11:37.095: INFO: Pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011935477s
Sep  6 21:11:39.099: INFO: Pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016183953s
STEP: Saw pod success
Sep  6 21:11:39.099: INFO: Pod "pod-8a59b44c-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:11:39.103: INFO: Trying to get logs from node hunter-worker2 pod pod-8a59b44c-f085-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:11:39.185: INFO: Waiting for pod pod-8a59b44c-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:11:39.195: INFO: Pod pod-8a59b44c-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:11:39.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bcpz6" for this suite.
Sep  6 21:11:45.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:11:45.251: INFO: namespace: e2e-tests-emptydir-bcpz6, resource: bindings, ignored listing per whitelist
Sep  6 21:11:45.323: INFO: namespace e2e-tests-emptydir-bcpz6 deletion completed in 6.123204036s

• [SLOW TEST:12.349 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
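The EmptyDir specs in this run all follow the same pattern: create a short-lived pod that mounts an emptyDir volume, wait for it to reach "success or failure", read the test container's logs, then delete the pod. As a rough illustration only (the image, command, and object names below are placeholders, not the ones the e2e framework actually uses), a pod of that shape can be built with the core/v1 types like this:

```go
// Sketch: the general shape of pod the EmptyDir conformance specs create.
// Image, command, and names are placeholders (assumptions), not the exact
// values used by the e2e framework.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: v1.PodSpec{
			// the spec waits for the pod to terminate, so it must not restart
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "ls -l /mnt/volume && touch /mnt/volume/f"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				// "default medium" means node-local storage; Medium: "Memory"
				// would be the tmpfs variant exercised by other EmptyDir specs
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```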
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:11:45.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  6 21:11:45.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qrd7q'
Sep  6 21:11:47.830: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  6 21:11:47.830: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Sep  6 21:11:49.843: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-29f25]
Sep  6 21:11:49.844: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-29f25" in namespace "e2e-tests-kubectl-qrd7q" to be "running and ready"
Sep  6 21:11:49.847: INFO: Pod "e2e-test-nginx-rc-29f25": Phase="Pending", Reason="", readiness=false. Elapsed: 3.439466ms
Sep  6 21:11:51.851: INFO: Pod "e2e-test-nginx-rc-29f25": Phase="Running", Reason="", readiness=true. Elapsed: 2.007416719s
Sep  6 21:11:51.851: INFO: Pod "e2e-test-nginx-rc-29f25" satisfied condition "running and ready"
Sep  6 21:11:51.851: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-29f25]
Sep  6 21:11:51.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qrd7q'
Sep  6 21:11:51.982: INFO: stderr: ""
Sep  6 21:11:51.982: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Sep  6 21:11:51.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qrd7q'
Sep  6 21:11:52.086: INFO: stderr: ""
Sep  6 21:11:52.086: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:11:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qrd7q" for this suite.
Sep  6 21:12:14.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:12:14.151: INFO: namespace: e2e-tests-kubectl-qrd7q, resource: bindings, ignored listing per whitelist
Sep  6 21:12:14.211: INFO: namespace e2e-tests-kubectl-qrd7q deletion completed in 22.114642544s

• [SLOW TEST:28.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
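The Kubectl run rc spec above drives the kubectl binary directly, and the exact commands it ran are logged verbatim. A standalone sketch that replays the same sequence with os/exec is shown below; the kubeconfig path, image, and namespace are copied from the log, and the rest is illustrative rather than the framework's own code.

```go
// Sketch: roughly the kubectl sequence logged by the "Kubectl run rc" spec.
// Paths, image, and namespace come from the log above; error handling is trimmed.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) string {
	base := []string{"--kubeconfig=/root/.kube/config"}
	out, err := exec.Command("/usr/local/bin/kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
	return string(out)
}

func main() {
	ns := "e2e-tests-kubectl-qrd7q" // namespace from the log
	// create the replication controller (--generator=run/v1 is what makes this
	// an RC rather than a Deployment, and is the deprecated path warned about above)
	fmt.Print(kubectl("run", "e2e-test-nginx-rc",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--generator=run/v1", "--namespace="+ns))
	// the spec waits for the RC's pod to be running and ready before asking for logs
	fmt.Print(kubectl("logs", "rc/e2e-test-nginx-rc", "--namespace="+ns))
	// cleanup, as in the spec's AfterEach block
	fmt.Print(kubectl("delete", "rc", "e2e-test-nginx-rc", "--namespace="+ns))
}
```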
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:12:14.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gqjxl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  6 21:12:14.287: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  6 21:12:34.403: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.95:8080/dial?request=hostName&protocol=http&host=10.244.2.104&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-gqjxl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:12:34.403: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:12:34.432161       7 log.go:172] (0xc0019e22c0) (0xc00098e460) Create stream
I0906 21:12:34.432196       7 log.go:172] (0xc0019e22c0) (0xc00098e460) Stream added, broadcasting: 1
I0906 21:12:34.434479       7 log.go:172] (0xc0019e22c0) Reply frame received for 1
I0906 21:12:34.434532       7 log.go:172] (0xc0019e22c0) (0xc00098e500) Create stream
I0906 21:12:34.434548       7 log.go:172] (0xc0019e22c0) (0xc00098e500) Stream added, broadcasting: 3
I0906 21:12:34.435467       7 log.go:172] (0xc0019e22c0) Reply frame received for 3
I0906 21:12:34.435502       7 log.go:172] (0xc0019e22c0) (0xc00104b7c0) Create stream
I0906 21:12:34.435514       7 log.go:172] (0xc0019e22c0) (0xc00104b7c0) Stream added, broadcasting: 5
I0906 21:12:34.436502       7 log.go:172] (0xc0019e22c0) Reply frame received for 5
I0906 21:12:34.531649       7 log.go:172] (0xc0019e22c0) Data frame received for 3
I0906 21:12:34.531692       7 log.go:172] (0xc00098e500) (3) Data frame handling
I0906 21:12:34.531713       7 log.go:172] (0xc00098e500) (3) Data frame sent
I0906 21:12:34.532494       7 log.go:172] (0xc0019e22c0) Data frame received for 3
I0906 21:12:34.532529       7 log.go:172] (0xc0019e22c0) Data frame received for 5
I0906 21:12:34.532560       7 log.go:172] (0xc00104b7c0) (5) Data frame handling
I0906 21:12:34.532589       7 log.go:172] (0xc00098e500) (3) Data frame handling
I0906 21:12:34.534438       7 log.go:172] (0xc0019e22c0) Data frame received for 1
I0906 21:12:34.534480       7 log.go:172] (0xc00098e460) (1) Data frame handling
I0906 21:12:34.534509       7 log.go:172] (0xc00098e460) (1) Data frame sent
I0906 21:12:34.534539       7 log.go:172] (0xc0019e22c0) (0xc00098e460) Stream removed, broadcasting: 1
I0906 21:12:34.534563       7 log.go:172] (0xc0019e22c0) Go away received
I0906 21:12:34.534689       7 log.go:172] (0xc0019e22c0) (0xc00098e460) Stream removed, broadcasting: 1
I0906 21:12:34.534724       7 log.go:172] (0xc0019e22c0) (0xc00098e500) Stream removed, broadcasting: 3
I0906 21:12:34.534737       7 log.go:172] (0xc0019e22c0) (0xc00104b7c0) Stream removed, broadcasting: 5
Sep  6 21:12:34.534: INFO: Waiting for endpoints: map[]
Sep  6 21:12:34.538: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.95:8080/dial?request=hostName&protocol=http&host=10.244.1.94&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-gqjxl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:12:34.538: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:12:34.572174       7 log.go:172] (0xc00045fad0) (0xc00104b9a0) Create stream
I0906 21:12:34.572202       7 log.go:172] (0xc00045fad0) (0xc00104b9a0) Stream added, broadcasting: 1
I0906 21:12:34.574165       7 log.go:172] (0xc00045fad0) Reply frame received for 1
I0906 21:12:34.574218       7 log.go:172] (0xc00045fad0) (0xc0013f8140) Create stream
I0906 21:12:34.574238       7 log.go:172] (0xc00045fad0) (0xc0013f8140) Stream added, broadcasting: 3
I0906 21:12:34.575231       7 log.go:172] (0xc00045fad0) Reply frame received for 3
I0906 21:12:34.575290       7 log.go:172] (0xc00045fad0) (0xc00104ba40) Create stream
I0906 21:12:34.575317       7 log.go:172] (0xc00045fad0) (0xc00104ba40) Stream added, broadcasting: 5
I0906 21:12:34.576447       7 log.go:172] (0xc00045fad0) Reply frame received for 5
I0906 21:12:34.637333       7 log.go:172] (0xc00045fad0) Data frame received for 3
I0906 21:12:34.637365       7 log.go:172] (0xc0013f8140) (3) Data frame handling
I0906 21:12:34.637388       7 log.go:172] (0xc0013f8140) (3) Data frame sent
I0906 21:12:34.638176       7 log.go:172] (0xc00045fad0) Data frame received for 3
I0906 21:12:34.638248       7 log.go:172] (0xc0013f8140) (3) Data frame handling
I0906 21:12:34.638331       7 log.go:172] (0xc00045fad0) Data frame received for 5
I0906 21:12:34.638372       7 log.go:172] (0xc00104ba40) (5) Data frame handling
I0906 21:12:34.639928       7 log.go:172] (0xc00045fad0) Data frame received for 1
I0906 21:12:34.640079       7 log.go:172] (0xc00104b9a0) (1) Data frame handling
I0906 21:12:34.640170       7 log.go:172] (0xc00104b9a0) (1) Data frame sent
I0906 21:12:34.640209       7 log.go:172] (0xc00045fad0) (0xc00104b9a0) Stream removed, broadcasting: 1
I0906 21:12:34.640318       7 log.go:172] (0xc00045fad0) Go away received
I0906 21:12:34.640345       7 log.go:172] (0xc00045fad0) (0xc00104b9a0) Stream removed, broadcasting: 1
I0906 21:12:34.640369       7 log.go:172] (0xc00045fad0) (0xc0013f8140) Stream removed, broadcasting: 3
I0906 21:12:34.640392       7 log.go:172] (0xc00045fad0) (0xc00104ba40) Stream removed, broadcasting: 5
Sep  6 21:12:34.640: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:12:34.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gqjxl" for this suite.
Sep  6 21:12:56.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:12:56.733: INFO: namespace: e2e-tests-pod-network-test-gqjxl, resource: bindings, ignored listing per whitelist
Sep  6 21:12:56.774: INFO: namespace e2e-tests-pod-network-test-gqjxl deletion completed in 22.130016908s

• [SLOW TEST:42.563 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
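The intra-pod check above execs `curl` inside a host-network helper pod against a netexec-style `/dial` endpoint on one test pod, asking it to probe a peer over HTTP and report the hostname it reached. A simplified stand-in that issues the same query directly with net/http is sketched below; the pod IPs are taken from the log and are only reachable from inside the cluster network, so this is illustrative rather than a faithful reproduction of the exec-based flow.

```go
// Sketch: the /dial query the networking spec issues via curl inside the
// hostexec pod, sent directly with net/http instead. The IPs are the pod IPs
// from the log and resolve only inside the cluster network.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", "10.244.2.104") // target test pod IP (from the log)
	q.Set("port", "8080")
	q.Set("tries", "1")
	// 10.244.1.95 is the dialer test pod from the log; /dial asks it to probe the target
	u := "http://10.244.1.95:8080/dial?" + q.Encode()

	resp, err := http.Get(u)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the spec collects the reported hostnames as endpoints
}
```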
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:12:56.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:12:56.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-n9qhl" for this suite.
Sep  6 21:13:02.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:13:02.994: INFO: namespace: e2e-tests-kubelet-test-n9qhl, resource: bindings, ignored listing per whitelist
Sep  6 21:13:03.063: INFO: namespace e2e-tests-kubelet-test-n9qhl deletion completed in 6.092747958s

• [SLOW TEST:6.288 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:13:03.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:13:33.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-5fnsp" for this suite.
Sep  6 21:13:39.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:13:39.054: INFO: namespace: e2e-tests-container-runtime-5fnsp, resource: bindings, ignored listing per whitelist
Sep  6 21:13:39.124: INFO: namespace e2e-tests-container-runtime-5fnsp deletion completed in 6.09156833s

• [SLOW TEST:36.061 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:13:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d58f5a34-f085-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:13:39.270: INFO: Waiting up to 5m0s for pod "pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-8svr5" to be "success or failure"
Sep  6 21:13:39.274: INFO: Pod "pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.89812ms
Sep  6 21:13:41.277: INFO: Pod "pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006820699s
Sep  6 21:13:43.281: INFO: Pod "pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010556835s
STEP: Saw pod success
Sep  6 21:13:43.281: INFO: Pod "pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:13:43.283: INFO: Trying to get logs from node hunter-worker pod pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 21:13:43.360: INFO: Waiting for pod pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:13:43.370: INFO: Pod pod-secrets-d5919af5-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:13:43.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8svr5" for this suite.
Sep  6 21:13:49.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:13:49.424: INFO: namespace: e2e-tests-secrets-8svr5, resource: bindings, ignored listing per whitelist
Sep  6 21:13:49.456: INFO: namespace e2e-tests-secrets-8svr5 deletion completed in 6.082696135s

• [SLOW TEST:10.332 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:13:49.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zf2hv in namespace e2e-tests-proxy-28fj6
I0906 21:13:49.580408       7 runners.go:184] Created replication controller with name: proxy-service-zf2hv, namespace: e2e-tests-proxy-28fj6, replica count: 1
I0906 21:13:50.630902       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:13:51.631113       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:13:52.631346       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:13:53.631573       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:13:54.631844       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 21:13:55.632170       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0906 21:13:56.632379       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0906 21:13:57.632587       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0906 21:13:58.632791       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0906 21:13:59.632985       7 runners.go:184] proxy-service-zf2hv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep  6 21:13:59.648: INFO: Endpoint e2e-tests-proxy-28fj6/proxy-service-zf2hv is not ready yet
Sep  6 21:14:01.651: INFO: setup took 12.116730833s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Sep  6 21:14:01.659: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-28fj6/pods/http:proxy-service-zf2hv-bgxz4:1080/proxy/: 
[... remainder of the Proxy spec output and the preamble of the next spec, [sig-storage] Projected configMap, are missing from the captured log ...]
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e849614a-f085-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:14:10.690: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-6shnk" to be "success or failure"
Sep  6 21:14:10.706: INFO: Pod "pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.801349ms
Sep  6 21:14:12.710: INFO: Pod "pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01980652s
Sep  6 21:14:14.715: INFO: Pod "pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024276228s
STEP: Saw pod success
Sep  6 21:14:14.715: INFO: Pod "pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:14:14.718: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 21:14:14.755: INFO: Waiting for pod pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008 to disappear
Sep  6 21:14:14.778: INFO: Pod pod-projected-configmaps-e84a0c2b-f085-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:14:14.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6shnk" for this suite.
Sep  6 21:14:20.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:14:20.880: INFO: namespace: e2e-tests-projected-6shnk, resource: bindings, ignored listing per whitelist
Sep  6 21:14:20.885: INFO: namespace e2e-tests-projected-6shnk deletion completed in 6.103132394s

• [SLOW TEST:10.335 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:14:20.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0906 21:14:22.097026       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  6 21:14:22.097: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:14:22.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bkhc9" for this suite.
Sep  6 21:14:28.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:14:28.167: INFO: namespace: e2e-tests-gc-bkhc9, resource: bindings, ignored listing per whitelist
Sep  6 21:14:28.225: INFO: namespace e2e-tests-gc-bkhc9 deletion completed in 6.125858479s

• [SLOW TEST:7.340 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:14:28.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Sep  6 21:14:32.470: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:14:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zdbnp" for this suite.
Sep  6 21:15:02.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:15:02.686: INFO: namespace: e2e-tests-namespaces-zdbnp, resource: bindings, ignored listing per whitelist
Sep  6 21:15:02.714: INFO: namespace e2e-tests-namespaces-zdbnp deletion completed in 6.09082863s
STEP: Destroying namespace "e2e-tests-nsdeletetest-2wt9k" for this suite.
Sep  6 21:15:02.716: INFO: Namespace e2e-tests-nsdeletetest-2wt9k was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-b7x2t" for this suite.
Sep  6 21:15:08.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:15:08.773: INFO: namespace: e2e-tests-nsdeletetest-b7x2t, resource: bindings, ignored listing per whitelist
Sep  6 21:15:08.835: INFO: namespace e2e-tests-nsdeletetest-b7x2t deletion completed in 6.118407028s

• [SLOW TEST:40.609 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:15:08.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:15:08.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-fjm2c" to be "success or failure"
Sep  6 21:15:08.977: INFO: Pod "downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259684ms
Sep  6 21:15:10.981: INFO: Pod "downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010060666s
Sep  6 21:15:13.025: INFO: Pod "downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054161629s
STEP: Saw pod success
Sep  6 21:15:13.025: INFO: Pod "downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:15:13.028: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:15:13.056: INFO: Waiting for pod downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008 to disappear
Sep  6 21:15:13.066: INFO: Pod downwardapi-volume-0b09206b-f086-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:15:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fjm2c" for this suite.
Sep  6 21:15:19.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:15:19.170: INFO: namespace: e2e-tests-downward-api-fjm2c, resource: bindings, ignored listing per whitelist
Sep  6 21:15:19.176: INFO: namespace e2e-tests-downward-api-fjm2c deletion completed in 6.106980753s

• [SLOW TEST:10.341 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
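The Downward API volume used above projects resource fields of a container into files; the memory-limit variant writes the test container's `limits.memory` into the mounted volume. A minimal sketch of that volume definition is shown below, using the core/v1 field names and the container name from the log; everything else is a placeholder, not the framework's own pod spec.

```go
// Sketch: a downward API volume file exposing a container's memory limit,
// as exercised by the spec above. Names other than "client-container"
// (taken from the log) are placeholders.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					// resourceFieldRef projects limits.memory of the named
					// container into the file at Path
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	item := vol.VolumeSource.DownwardAPI.Items[0]
	fmt.Printf("volume %q exposes %s of %s at %s\n",
		vol.Name, item.ResourceFieldRef.Resource, item.ResourceFieldRef.ContainerName, item.Path)
}
```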
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:15:19.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-h6zrb
Sep  6 21:15:23.341: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-h6zrb
STEP: checking the pod's current state and verifying that restartCount is present
Sep  6 21:15:23.344: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:19:23.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-h6zrb" for this suite.
Sep  6 21:19:29.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:19:29.970: INFO: namespace: e2e-tests-container-probe-h6zrb, resource: bindings, ignored listing per whitelist
Sep  6 21:19:29.986: INFO: namespace e2e-tests-container-probe-h6zrb deletion completed in 6.098031975s

• [SLOW TEST:250.809 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
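The probing spec above starts a pod whose container serves an HTTP `/healthz` endpoint and then watches for four minutes to confirm the kubelet never restarts it (restartCount stays 0). Purely as an illustration of the kind of endpoint such a liveness probe polls (this is not the actual liveness test image), a minimal handler looks like this:

```go
// Sketch: a minimal /healthz endpoint of the kind an HTTP liveness probe
// polls. Illustrative only; not the image the e2e spec runs.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Always healthy: a probe against this handler should never trigger a
		// restart, which is what the spec above asserts.
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```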
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:19:29.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Sep  6 21:19:30.135: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  6 21:19:30.142: INFO: Waiting for terminating namespaces to be deleted...
Sep  6 21:19:30.145: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Sep  6 21:19:30.153: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 21:19:30.153: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  6 21:19:30.153: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 21:19:30.153: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  6 21:19:30.153: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Sep  6 21:19:30.158: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 21:19:30.158: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  6 21:19:30.158: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 21:19:30.158: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a91ed3c5-f086-11ea-b72c-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-a91ed3c5-f086-11ea-b72c-0242ac110008 off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a91ed3c5-f086-11ea-b72c-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:19:38.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-84p5c" for this suite.
Sep  6 21:19:52.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:19:52.477: INFO: namespace: e2e-tests-sched-pred-84p5c, resource: bindings, ignored listing per whitelist
Sep  6 21:19:52.517: INFO: namespace e2e-tests-sched-pred-84p5c deletion completed in 14.164603558s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.530 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
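The NodeSelector spec above finds a schedulable node, applies a random label to it, relaunches the pod with a matching nodeSelector, and finally strips the label again. The same labeling steps can be reproduced with plain kubectl; the sketch below shells out to kubectl with the node name and label key/value taken from the log, and is only an approximation of what the test does through the API.

```go
// Sketch: the node-labeling steps of the NodeSelector spec above, done with
// plain kubectl invocations. Node name and label key/value come from the log.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v -> %s (err=%v)\n", args, out, err)
}

func main() {
	node := "hunter-worker2"
	key := "kubernetes.io/e2e-a91ed3c5-f086-11ea-b72c-0242ac110008" // random key from the log
	// apply the label; a pod whose nodeSelector maps this key to "42" now fits only this node
	run("label", "node", node, key+"=42")
	// ... relaunch the pod with a matching nodeSelector and wait for it to run ...
	// remove the label again, mirroring the spec's cleanup step
	run("label", "node", node, key+"-")
}
```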
S
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:19:52.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t42tm.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t42tm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t42tm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t42tm.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t42tm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t42tm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep  6 21:20:12.736: INFO: DNS probes using e2e-tests-dns-t42tm/dns-test-b419df73-f086-11ea-b72c-0242ac110008 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:20:12.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-t42tm" for this suite.
Sep  6 21:20:18.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:20:18.855: INFO: namespace: e2e-tests-dns-t42tm, resource: bindings, ignored listing per whitelist
Sep  6 21:20:18.875: INFO: namespace e2e-tests-dns-t42tm deletion completed in 6.091060829s

• [SLOW TEST:26.359 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
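The DNS spec embeds its dig-based probe script verbatim above; each name that resolves produces an OK marker file which the test then reads back from the probe pod. An equivalent spot check from Go is sketched below. Note that the short names only resolve inside a pod, where the cluster DNS server and search path are in effect, so this is an in-cluster sketch rather than something meaningful from a workstation.

```go
// Sketch: resolving the same names the dig probe script checks, using the Go
// resolver. Short names such as "kubernetes.default" only resolve inside a
// pod that uses the cluster DNS search path.
package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		// the probe script writes an OK marker file per name instead of printing
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}
```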
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:20:18.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:20:19.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-cf5sk" for this suite.
Sep  6 21:20:25.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:20:25.051: INFO: namespace: e2e-tests-services-cf5sk, resource: bindings, ignored listing per whitelist
Sep  6 21:20:25.117: INFO: namespace e2e-tests-services-cf5sk deletion completed in 6.09957902s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.241 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:20:25.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:20:25.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-nnmkz" to be "success or failure"
Sep  6 21:20:25.251: INFO: Pod "downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.403766ms
Sep  6 21:20:27.255: INFO: Pod "downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011217083s
Sep  6 21:20:29.259: INFO: Pod "downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014950268s
STEP: Saw pod success
Sep  6 21:20:29.259: INFO: Pod "downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:20:29.261: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:20:29.318: INFO: Waiting for pod downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008 to disappear
Sep  6 21:20:29.325: INFO: Pod downwardapi-volume-c784aa44-f086-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:20:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nnmkz" for this suite.
Sep  6 21:20:35.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:20:35.362: INFO: namespace: e2e-tests-downward-api-nnmkz, resource: bindings, ignored listing per whitelist
Sep  6 21:20:35.417: INFO: namespace e2e-tests-downward-api-nnmkz deletion completed in 6.088228959s

• [SLOW TEST:10.300 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:20:35.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  6 21:20:35.543: INFO: Waiting up to 5m0s for pod "pod-cdaffebc-f086-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-r2lmb" to be "success or failure"
Sep  6 21:20:35.559: INFO: Pod "pod-cdaffebc-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.609433ms
Sep  6 21:20:37.563: INFO: Pod "pod-cdaffebc-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020858489s
Sep  6 21:20:39.567: INFO: Pod "pod-cdaffebc-f086-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02441272s
STEP: Saw pod success
Sep  6 21:20:39.567: INFO: Pod "pod-cdaffebc-f086-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:20:39.570: INFO: Trying to get logs from node hunter-worker2 pod pod-cdaffebc-f086-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:20:39.622: INFO: Waiting for pod pod-cdaffebc-f086-11ea-b72c-0242ac110008 to disappear
Sep  6 21:20:39.625: INFO: Pod pod-cdaffebc-f086-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:20:39.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r2lmb" for this suite.
Sep  6 21:20:45.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:20:45.689: INFO: namespace: e2e-tests-emptydir-r2lmb, resource: bindings, ignored listing per whitelist
Sep  6 21:20:45.732: INFO: namespace e2e-tests-emptydir-r2lmb deletion completed in 6.09340947s

• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:20:45.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  6 21:20:45.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tvqdj'
Sep  6 21:20:45.964: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  6 21:20:45.964: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Sep  6 21:20:45.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-tvqdj'
Sep  6 21:20:46.076: INFO: stderr: ""
Sep  6 21:20:46.076: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:20:46.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tvqdj" for this suite.
Sep  6 21:21:08.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:21:08.152: INFO: namespace: e2e-tests-kubectl-tvqdj, resource: bindings, ignored listing per whitelist
Sep  6 21:21:08.199: INFO: namespace e2e-tests-kubectl-tvqdj deletion completed in 22.120127503s

• [SLOW TEST:22.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
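
The invocation logged above uses the job/v1 generator that the stderr warning flags as deprecated (it was removed in later releases). A sketch of that form next to the kubectl create job replacement the warning points toward; the job name and namespace are the ones from this run:

  # 1.13-era form, as run by the test (generator is deprecated)
  kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
    --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tvqdj

  # equivalent on newer kubectl releases
  kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine \
    --namespace=e2e-tests-kubectl-tvqdj

  # clean up, as the AfterEach step does
  kubectl delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-tvqdj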
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:21:08.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:21:08.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-v2zmv" to be "success or failure"
Sep  6 21:21:08.344: INFO: Pod "downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.950544ms
Sep  6 21:21:10.527: INFO: Pod "downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199755978s
Sep  6 21:21:12.532: INFO: Pod "downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204120727s
STEP: Saw pod success
Sep  6 21:21:12.532: INFO: Pod "downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:21:12.535: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:21:12.737: INFO: Waiting for pod downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008 to disappear
Sep  6 21:21:12.757: INFO: Pod downwardapi-volume-e135e810-f086-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:21:12.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v2zmv" for this suite.
Sep  6 21:21:18.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:21:18.914: INFO: namespace: e2e-tests-projected-v2zmv, resource: bindings, ignored listing per whitelist
Sep  6 21:21:18.916: INFO: namespace e2e-tests-projected-v2zmv deletion completed in 6.155269969s

• [SLOW TEST:10.717 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
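
The downward API volume plugin exercised here exposes the container's CPU limit as a file in a projected volume. A minimal sketch of such a pod, assuming a busybox reader in place of the framework's client-container and an illustrative 500m limit:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cpu-limit-demo      # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m             # report the limit in millicores, so the file reads "500"
  EOF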
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:21:18.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Sep  6 21:21:19.024: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix492485535/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:21:19.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d5vsr" for this suite.
Sep  6 21:21:25.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:21:25.127: INFO: namespace: e2e-tests-kubectl-d5vsr, resource: bindings, ignored listing per whitelist
Sep  6 21:21:25.187: INFO: namespace e2e-tests-kubectl-d5vsr deletion completed in 6.091824589s

• [SLOW TEST:6.271 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
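
This test starts kubectl proxy on a Unix domain socket instead of a TCP port and then reads /api/ through it. A minimal sketch of the same flow with an illustrative socket path:

  # serve the API over a unix socket (path is illustrative)
  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

  # the "retrieving proxy /api/ output" step amounts to a request like this
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/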
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:21:25.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5q979
Sep  6 21:21:31.304: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5q979
STEP: checking the pod's current state and verifying that restartCount is present
Sep  6 21:21:31.307: INFO: Initial restart count of pod liveness-http is 0
Sep  6 21:21:55.358: INFO: Restart count of pod e2e-tests-container-probe-5q979/liveness-http is now 1 (24.051588493s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:21:55.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5q979" for this suite.
Sep  6 21:22:01.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:22:01.441: INFO: namespace: e2e-tests-container-probe-5q979, resource: bindings, ignored listing per whitelist
Sep  6 21:22:01.475: INFO: namespace e2e-tests-container-probe-5q979 deletion completed in 6.093037329s

• [SLOW TEST:36.288 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
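
The liveness-http pod is restarted once its /healthz endpoint starts failing, and the test watches status.containerStatuses[].restartCount go from 0 to 1. A minimal sketch of a pod with such a probe, assuming the liveness sample image from the Kubernetes docs (which answers /healthz with 200 for roughly ten seconds and then 500); the exact image and timings used by the conformance test may differ:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http
  spec:
    containers:
    - name: liveness
      image: registry.k8s.io/liveness   # docs sample image; an assumption, not the test's image
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
  EOF

  # poll the restart counter, as the test does
  kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'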
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:22:01.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6b2r2
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6b2r2
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6b2r2
Sep  6 21:22:01.672: INFO: Found 0 stateful pods, waiting for 1
Sep  6 21:22:11.677: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep  6 21:22:11.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  6 21:22:11.918: INFO: stderr: "I0906 21:22:11.806989    2120 log.go:172] (0xc0007fe2c0) (0xc000734640) Create stream\nI0906 21:22:11.807068    2120 log.go:172] (0xc0007fe2c0) (0xc000734640) Stream added, broadcasting: 1\nI0906 21:22:11.809377    2120 log.go:172] (0xc0007fe2c0) Reply frame received for 1\nI0906 21:22:11.809410    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2c80) Create stream\nI0906 21:22:11.809418    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2c80) Stream added, broadcasting: 3\nI0906 21:22:11.810521    2120 log.go:172] (0xc0007fe2c0) Reply frame received for 3\nI0906 21:22:11.810543    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2dc0) Create stream\nI0906 21:22:11.810550    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2dc0) Stream added, broadcasting: 5\nI0906 21:22:11.811355    2120 log.go:172] (0xc0007fe2c0) Reply frame received for 5\nI0906 21:22:11.913193    2120 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0906 21:22:11.913225    2120 log.go:172] (0xc0005d2c80) (3) Data frame handling\nI0906 21:22:11.913248    2120 log.go:172] (0xc0005d2c80) (3) Data frame sent\nI0906 21:22:11.913261    2120 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0906 21:22:11.913274    2120 log.go:172] (0xc0007fe2c0) Data frame received for 5\nI0906 21:22:11.913292    2120 log.go:172] (0xc0005d2dc0) (5) Data frame handling\nI0906 21:22:11.913318    2120 log.go:172] (0xc0005d2c80) (3) Data frame handling\nI0906 21:22:11.915680    2120 log.go:172] (0xc0007fe2c0) Data frame received for 1\nI0906 21:22:11.915695    2120 log.go:172] (0xc000734640) (1) Data frame handling\nI0906 21:22:11.915701    2120 log.go:172] (0xc000734640) (1) Data frame sent\nI0906 21:22:11.915712    2120 log.go:172] (0xc0007fe2c0) (0xc000734640) Stream removed, broadcasting: 1\nI0906 21:22:11.915775    2120 log.go:172] (0xc0007fe2c0) Go away received\nI0906 21:22:11.915878    2120 log.go:172] (0xc0007fe2c0) (0xc000734640) Stream removed, broadcasting: 1\nI0906 21:22:11.915939    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2c80) Stream removed, broadcasting: 3\nI0906 21:22:11.915953    2120 log.go:172] (0xc0007fe2c0) (0xc0005d2dc0) Stream removed, broadcasting: 5\n"
Sep  6 21:22:11.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  6 21:22:11.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  6 21:22:11.923: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep  6 21:22:21.928: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  6 21:22:21.928: INFO: Waiting for statefulset status.replicas updated to 0
Sep  6 21:22:22.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999579s
Sep  6 21:22:23.045: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.906306146s
Sep  6 21:22:24.048: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.893201015s
Sep  6 21:22:25.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.8894878s
Sep  6 21:22:26.058: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.884448031s
Sep  6 21:22:27.064: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.880038848s
Sep  6 21:22:28.069: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.874012645s
Sep  6 21:22:29.111: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.868985866s
Sep  6 21:22:30.115: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.827195204s
Sep  6 21:22:31.119: INFO: Verifying statefulset ss doesn't scale past 1 for another 822.505238ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6b2r2
Sep  6 21:22:32.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  6 21:22:32.352: INFO: stderr: "I0906 21:22:32.260207    2143 log.go:172] (0xc000162790) (0xc00071a640) Create stream\nI0906 21:22:32.260304    2143 log.go:172] (0xc000162790) (0xc00071a640) Stream added, broadcasting: 1\nI0906 21:22:32.263098    2143 log.go:172] (0xc000162790) Reply frame received for 1\nI0906 21:22:32.263159    2143 log.go:172] (0xc000162790) (0xc00065ebe0) Create stream\nI0906 21:22:32.263175    2143 log.go:172] (0xc000162790) (0xc00065ebe0) Stream added, broadcasting: 3\nI0906 21:22:32.264275    2143 log.go:172] (0xc000162790) Reply frame received for 3\nI0906 21:22:32.264302    2143 log.go:172] (0xc000162790) (0xc00071a6e0) Create stream\nI0906 21:22:32.264310    2143 log.go:172] (0xc000162790) (0xc00071a6e0) Stream added, broadcasting: 5\nI0906 21:22:32.265264    2143 log.go:172] (0xc000162790) Reply frame received for 5\nI0906 21:22:32.345367    2143 log.go:172] (0xc000162790) Data frame received for 3\nI0906 21:22:32.345399    2143 log.go:172] (0xc00065ebe0) (3) Data frame handling\nI0906 21:22:32.345407    2143 log.go:172] (0xc00065ebe0) (3) Data frame sent\nI0906 21:22:32.345413    2143 log.go:172] (0xc000162790) Data frame received for 3\nI0906 21:22:32.345417    2143 log.go:172] (0xc00065ebe0) (3) Data frame handling\nI0906 21:22:32.345440    2143 log.go:172] (0xc000162790) Data frame received for 5\nI0906 21:22:32.345446    2143 log.go:172] (0xc00071a6e0) (5) Data frame handling\nI0906 21:22:32.348959    2143 log.go:172] (0xc000162790) Data frame received for 1\nI0906 21:22:32.348981    2143 log.go:172] (0xc00071a640) (1) Data frame handling\nI0906 21:22:32.348991    2143 log.go:172] (0xc00071a640) (1) Data frame sent\nI0906 21:22:32.349001    2143 log.go:172] (0xc000162790) (0xc00071a640) Stream removed, broadcasting: 1\nI0906 21:22:32.349011    2143 log.go:172] (0xc000162790) Go away received\nI0906 21:22:32.349333    2143 log.go:172] (0xc000162790) (0xc00071a640) Stream removed, broadcasting: 1\nI0906 21:22:32.349371    2143 log.go:172] (0xc000162790) (0xc00065ebe0) Stream removed, broadcasting: 3\nI0906 21:22:32.349393    2143 log.go:172] (0xc000162790) (0xc00071a6e0) Stream removed, broadcasting: 5\n"
Sep  6 21:22:32.352: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  6 21:22:32.352: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  6 21:22:32.356: INFO: Found 1 stateful pods, waiting for 3
Sep  6 21:22:42.360: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 21:22:42.360: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 21:22:42.360: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep  6 21:22:42.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  6 21:22:42.576: INFO: stderr: "I0906 21:22:42.485472    2164 log.go:172] (0xc000738370) (0xc00077a640) Create stream\nI0906 21:22:42.485530    2164 log.go:172] (0xc000738370) (0xc00077a640) Stream added, broadcasting: 1\nI0906 21:22:42.488192    2164 log.go:172] (0xc000738370) Reply frame received for 1\nI0906 21:22:42.488243    2164 log.go:172] (0xc000738370) (0xc0006c0c80) Create stream\nI0906 21:22:42.488259    2164 log.go:172] (0xc000738370) (0xc0006c0c80) Stream added, broadcasting: 3\nI0906 21:22:42.489224    2164 log.go:172] (0xc000738370) Reply frame received for 3\nI0906 21:22:42.489264    2164 log.go:172] (0xc000738370) (0xc0006c0dc0) Create stream\nI0906 21:22:42.489276    2164 log.go:172] (0xc000738370) (0xc0006c0dc0) Stream added, broadcasting: 5\nI0906 21:22:42.490245    2164 log.go:172] (0xc000738370) Reply frame received for 5\nI0906 21:22:42.569063    2164 log.go:172] (0xc000738370) Data frame received for 3\nI0906 21:22:42.569114    2164 log.go:172] (0xc0006c0c80) (3) Data frame handling\nI0906 21:22:42.569156    2164 log.go:172] (0xc0006c0c80) (3) Data frame sent\nI0906 21:22:42.569209    2164 log.go:172] (0xc000738370) Data frame received for 3\nI0906 21:22:42.569232    2164 log.go:172] (0xc0006c0c80) (3) Data frame handling\nI0906 21:22:42.569851    2164 log.go:172] (0xc000738370) Data frame received for 5\nI0906 21:22:42.569898    2164 log.go:172] (0xc0006c0dc0) (5) Data frame handling\nI0906 21:22:42.572910    2164 log.go:172] (0xc000738370) Data frame received for 1\nI0906 21:22:42.572945    2164 log.go:172] (0xc00077a640) (1) Data frame handling\nI0906 21:22:42.572984    2164 log.go:172] (0xc00077a640) (1) Data frame sent\nI0906 21:22:42.573018    2164 log.go:172] (0xc000738370) (0xc00077a640) Stream removed, broadcasting: 1\nI0906 21:22:42.573048    2164 log.go:172] (0xc000738370) Go away received\nI0906 21:22:42.573360    2164 log.go:172] (0xc000738370) (0xc00077a640) Stream removed, broadcasting: 1\nI0906 21:22:42.573386    2164 log.go:172] (0xc000738370) (0xc0006c0c80) Stream removed, broadcasting: 3\nI0906 21:22:42.573400    2164 log.go:172] (0xc000738370) (0xc0006c0dc0) Stream removed, broadcasting: 5\n"
Sep  6 21:22:42.577: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  6 21:22:42.577: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  6 21:22:42.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  6 21:22:42.842: INFO: stderr: "I0906 21:22:42.709621    2186 log.go:172] (0xc0007ac2c0) (0xc000689540) Create stream\nI0906 21:22:42.709673    2186 log.go:172] (0xc0007ac2c0) (0xc000689540) Stream added, broadcasting: 1\nI0906 21:22:42.712146    2186 log.go:172] (0xc0007ac2c0) Reply frame received for 1\nI0906 21:22:42.712188    2186 log.go:172] (0xc0007ac2c0) (0xc00044a000) Create stream\nI0906 21:22:42.712200    2186 log.go:172] (0xc0007ac2c0) (0xc00044a000) Stream added, broadcasting: 3\nI0906 21:22:42.713398    2186 log.go:172] (0xc0007ac2c0) Reply frame received for 3\nI0906 21:22:42.713433    2186 log.go:172] (0xc0007ac2c0) (0xc00036a000) Create stream\nI0906 21:22:42.713443    2186 log.go:172] (0xc0007ac2c0) (0xc00036a000) Stream added, broadcasting: 5\nI0906 21:22:42.714568    2186 log.go:172] (0xc0007ac2c0) Reply frame received for 5\nI0906 21:22:42.836648    2186 log.go:172] (0xc0007ac2c0) Data frame received for 3\nI0906 21:22:42.836693    2186 log.go:172] (0xc00044a000) (3) Data frame handling\nI0906 21:22:42.836718    2186 log.go:172] (0xc00044a000) (3) Data frame sent\nI0906 21:22:42.836779    2186 log.go:172] (0xc0007ac2c0) Data frame received for 5\nI0906 21:22:42.836811    2186 log.go:172] (0xc00036a000) (5) Data frame handling\nI0906 21:22:42.836863    2186 log.go:172] (0xc0007ac2c0) Data frame received for 3\nI0906 21:22:42.836895    2186 log.go:172] (0xc00044a000) (3) Data frame handling\nI0906 21:22:42.838711    2186 log.go:172] (0xc0007ac2c0) Data frame received for 1\nI0906 21:22:42.838738    2186 log.go:172] (0xc000689540) (1) Data frame handling\nI0906 21:22:42.838761    2186 log.go:172] (0xc000689540) (1) Data frame sent\nI0906 21:22:42.838924    2186 log.go:172] (0xc0007ac2c0) (0xc000689540) Stream removed, broadcasting: 1\nI0906 21:22:42.838968    2186 log.go:172] (0xc0007ac2c0) Go away received\nI0906 21:22:42.839210    2186 log.go:172] (0xc0007ac2c0) (0xc000689540) Stream removed, broadcasting: 1\nI0906 21:22:42.839314    2186 log.go:172] (0xc0007ac2c0) (0xc00044a000) Stream removed, broadcasting: 3\nI0906 21:22:42.839343    2186 log.go:172] (0xc0007ac2c0) (0xc00036a000) Stream removed, broadcasting: 5\n"
Sep  6 21:22:42.843: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  6 21:22:42.843: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  6 21:22:42.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  6 21:22:43.084: INFO: stderr: "I0906 21:22:42.967626    2209 log.go:172] (0xc000138840) (0xc0005972c0) Create stream\nI0906 21:22:42.967823    2209 log.go:172] (0xc000138840) (0xc0005972c0) Stream added, broadcasting: 1\nI0906 21:22:42.977511    2209 log.go:172] (0xc000138840) Reply frame received for 1\nI0906 21:22:42.977649    2209 log.go:172] (0xc000138840) (0xc00077c000) Create stream\nI0906 21:22:42.977706    2209 log.go:172] (0xc000138840) (0xc00077c000) Stream added, broadcasting: 3\nI0906 21:22:42.985500    2209 log.go:172] (0xc000138840) Reply frame received for 3\nI0906 21:22:42.985654    2209 log.go:172] (0xc000138840) (0xc000574000) Create stream\nI0906 21:22:42.985721    2209 log.go:172] (0xc000138840) (0xc000574000) Stream added, broadcasting: 5\nI0906 21:22:42.990190    2209 log.go:172] (0xc000138840) Reply frame received for 5\nI0906 21:22:43.078647    2209 log.go:172] (0xc000138840) Data frame received for 3\nI0906 21:22:43.078718    2209 log.go:172] (0xc00077c000) (3) Data frame handling\nI0906 21:22:43.078751    2209 log.go:172] (0xc00077c000) (3) Data frame sent\nI0906 21:22:43.078798    2209 log.go:172] (0xc000138840) Data frame received for 5\nI0906 21:22:43.078821    2209 log.go:172] (0xc000574000) (5) Data frame handling\nI0906 21:22:43.078902    2209 log.go:172] (0xc000138840) Data frame received for 3\nI0906 21:22:43.078920    2209 log.go:172] (0xc00077c000) (3) Data frame handling\nI0906 21:22:43.081115    2209 log.go:172] (0xc000138840) Data frame received for 1\nI0906 21:22:43.081131    2209 log.go:172] (0xc0005972c0) (1) Data frame handling\nI0906 21:22:43.081139    2209 log.go:172] (0xc0005972c0) (1) Data frame sent\nI0906 21:22:43.081154    2209 log.go:172] (0xc000138840) (0xc0005972c0) Stream removed, broadcasting: 1\nI0906 21:22:43.081313    2209 log.go:172] (0xc000138840) (0xc0005972c0) Stream removed, broadcasting: 1\nI0906 21:22:43.081352    2209 log.go:172] (0xc000138840) Go away received\nI0906 21:22:43.081386    2209 log.go:172] (0xc000138840) (0xc00077c000) Stream removed, broadcasting: 3\nI0906 21:22:43.081415    2209 log.go:172] (0xc000138840) (0xc000574000) Stream removed, broadcasting: 5\n"
Sep  6 21:22:43.084: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  6 21:22:43.085: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  6 21:22:43.085: INFO: Waiting for statefulset status.replicas updated to 0
Sep  6 21:22:43.099: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Sep  6 21:22:53.145: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  6 21:22:53.145: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep  6 21:22:53.145: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep  6 21:22:53.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999582s
Sep  6 21:22:54.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990108225s
Sep  6 21:22:55.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985798547s
Sep  6 21:22:56.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981001519s
Sep  6 21:22:57.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975745843s
Sep  6 21:22:58.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971105103s
Sep  6 21:22:59.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964127094s
Sep  6 21:23:00.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939740027s
Sep  6 21:23:01.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.934653493s
Sep  6 21:23:02.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.816648ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-6b2r2
Sep  6 21:23:03.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  6 21:23:03.419: INFO: stderr: "I0906 21:23:03.360535    2231 log.go:172] (0xc000138840) (0xc0005cd400) Create stream\nI0906 21:23:03.360628    2231 log.go:172] (0xc000138840) (0xc0005cd400) Stream added, broadcasting: 1\nI0906 21:23:03.365744    2231 log.go:172] (0xc000138840) Reply frame received for 1\nI0906 21:23:03.365780    2231 log.go:172] (0xc000138840) (0xc0005cd4a0) Create stream\nI0906 21:23:03.365795    2231 log.go:172] (0xc000138840) (0xc0005cd4a0) Stream added, broadcasting: 3\nI0906 21:23:03.366506    2231 log.go:172] (0xc000138840) Reply frame received for 3\nI0906 21:23:03.366537    2231 log.go:172] (0xc000138840) (0xc0005cd540) Create stream\nI0906 21:23:03.366553    2231 log.go:172] (0xc000138840) (0xc0005cd540) Stream added, broadcasting: 5\nI0906 21:23:03.367182    2231 log.go:172] (0xc000138840) Reply frame received for 5\nI0906 21:23:03.415263    2231 log.go:172] (0xc000138840) Data frame received for 5\nI0906 21:23:03.415312    2231 log.go:172] (0xc0005cd540) (5) Data frame handling\nI0906 21:23:03.415336    2231 log.go:172] (0xc000138840) Data frame received for 3\nI0906 21:23:03.415350    2231 log.go:172] (0xc0005cd4a0) (3) Data frame handling\nI0906 21:23:03.415363    2231 log.go:172] (0xc0005cd4a0) (3) Data frame sent\nI0906 21:23:03.415377    2231 log.go:172] (0xc000138840) Data frame received for 3\nI0906 21:23:03.415390    2231 log.go:172] (0xc0005cd4a0) (3) Data frame handling\nI0906 21:23:03.417155    2231 log.go:172] (0xc000138840) Data frame received for 1\nI0906 21:23:03.417190    2231 log.go:172] (0xc0005cd400) (1) Data frame handling\nI0906 21:23:03.417231    2231 log.go:172] (0xc0005cd400) (1) Data frame sent\nI0906 21:23:03.417265    2231 log.go:172] (0xc000138840) (0xc0005cd400) Stream removed, broadcasting: 1\nI0906 21:23:03.417313    2231 log.go:172] (0xc000138840) Go away received\nI0906 21:23:03.417458    2231 log.go:172] (0xc000138840) (0xc0005cd400) Stream removed, broadcasting: 1\nI0906 21:23:03.417477    2231 log.go:172] (0xc000138840) (0xc0005cd4a0) Stream removed, broadcasting: 3\nI0906 21:23:03.417483    2231 log.go:172] (0xc000138840) (0xc0005cd540) Stream removed, broadcasting: 5\n"
Sep  6 21:23:03.420: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  6 21:23:03.420: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  6 21:23:03.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  6 21:23:03.614: INFO: stderr: "I0906 21:23:03.555519    2254 log.go:172] (0xc0001386e0) (0xc00072c640) Create stream\nI0906 21:23:03.555583    2254 log.go:172] (0xc0001386e0) (0xc00072c640) Stream added, broadcasting: 1\nI0906 21:23:03.557926    2254 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0906 21:23:03.557978    2254 log.go:172] (0xc0001386e0) (0xc00061edc0) Create stream\nI0906 21:23:03.557992    2254 log.go:172] (0xc0001386e0) (0xc00061edc0) Stream added, broadcasting: 3\nI0906 21:23:03.558844    2254 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0906 21:23:03.558876    2254 log.go:172] (0xc0001386e0) (0xc00072c6e0) Create stream\nI0906 21:23:03.558885    2254 log.go:172] (0xc0001386e0) (0xc00072c6e0) Stream added, broadcasting: 5\nI0906 21:23:03.559884    2254 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0906 21:23:03.610475    2254 log.go:172] (0xc0001386e0) Data frame received for 5\nI0906 21:23:03.610516    2254 log.go:172] (0xc00072c6e0) (5) Data frame handling\nI0906 21:23:03.610540    2254 log.go:172] (0xc0001386e0) Data frame received for 3\nI0906 21:23:03.610548    2254 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0906 21:23:03.610559    2254 log.go:172] (0xc00061edc0) (3) Data frame sent\nI0906 21:23:03.610567    2254 log.go:172] (0xc0001386e0) Data frame received for 3\nI0906 21:23:03.610575    2254 log.go:172] (0xc00061edc0) (3) Data frame handling\nI0906 21:23:03.611665    2254 log.go:172] (0xc0001386e0) Data frame received for 1\nI0906 21:23:03.611701    2254 log.go:172] (0xc00072c640) (1) Data frame handling\nI0906 21:23:03.611715    2254 log.go:172] (0xc00072c640) (1) Data frame sent\nI0906 21:23:03.611742    2254 log.go:172] (0xc0001386e0) (0xc00072c640) Stream removed, broadcasting: 1\nI0906 21:23:03.611769    2254 log.go:172] (0xc0001386e0) Go away received\nI0906 21:23:03.611956    2254 log.go:172] (0xc0001386e0) (0xc00072c640) Stream removed, broadcasting: 1\nI0906 21:23:03.611976    2254 log.go:172] (0xc0001386e0) (0xc00061edc0) Stream removed, broadcasting: 3\nI0906 21:23:03.611986    2254 log.go:172] (0xc0001386e0) (0xc00072c6e0) Stream removed, broadcasting: 5\n"
Sep  6 21:23:03.614: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  6 21:23:03.614: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  6 21:23:03.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b2r2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  6 21:23:03.817: INFO: stderr: "I0906 21:23:03.746456    2276 log.go:172] (0xc0006e6420) (0xc0006a8640) Create stream\nI0906 21:23:03.746520    2276 log.go:172] (0xc0006e6420) (0xc0006a8640) Stream added, broadcasting: 1\nI0906 21:23:03.749371    2276 log.go:172] (0xc0006e6420) Reply frame received for 1\nI0906 21:23:03.749441    2276 log.go:172] (0xc0006e6420) (0xc0006a86e0) Create stream\nI0906 21:23:03.749465    2276 log.go:172] (0xc0006e6420) (0xc0006a86e0) Stream added, broadcasting: 3\nI0906 21:23:03.750527    2276 log.go:172] (0xc0006e6420) Reply frame received for 3\nI0906 21:23:03.750579    2276 log.go:172] (0xc0006e6420) (0xc000122be0) Create stream\nI0906 21:23:03.750600    2276 log.go:172] (0xc0006e6420) (0xc000122be0) Stream added, broadcasting: 5\nI0906 21:23:03.751601    2276 log.go:172] (0xc0006e6420) Reply frame received for 5\nI0906 21:23:03.813314    2276 log.go:172] (0xc0006e6420) Data frame received for 5\nI0906 21:23:03.813352    2276 log.go:172] (0xc000122be0) (5) Data frame handling\nI0906 21:23:03.813388    2276 log.go:172] (0xc0006e6420) Data frame received for 3\nI0906 21:23:03.813438    2276 log.go:172] (0xc0006a86e0) (3) Data frame handling\nI0906 21:23:03.813470    2276 log.go:172] (0xc0006a86e0) (3) Data frame sent\nI0906 21:23:03.813492    2276 log.go:172] (0xc0006e6420) Data frame received for 3\nI0906 21:23:03.813512    2276 log.go:172] (0xc0006a86e0) (3) Data frame handling\nI0906 21:23:03.815084    2276 log.go:172] (0xc0006e6420) Data frame received for 1\nI0906 21:23:03.815098    2276 log.go:172] (0xc0006a8640) (1) Data frame handling\nI0906 21:23:03.815108    2276 log.go:172] (0xc0006a8640) (1) Data frame sent\nI0906 21:23:03.815233    2276 log.go:172] (0xc0006e6420) (0xc0006a8640) Stream removed, broadcasting: 1\nI0906 21:23:03.815437    2276 log.go:172] (0xc0006e6420) (0xc0006a8640) Stream removed, broadcasting: 1\nI0906 21:23:03.815457    2276 log.go:172] (0xc0006e6420) (0xc0006a86e0) Stream removed, broadcasting: 3\nI0906 21:23:03.815569    2276 log.go:172] (0xc0006e6420) Go away received\nI0906 21:23:03.815678    2276 log.go:172] (0xc0006e6420) (0xc000122be0) Stream removed, broadcasting: 5\n"
Sep  6 21:23:03.818: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  6 21:23:03.818: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  6 21:23:03.818: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Sep  6 21:23:23.833: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6b2r2
Sep  6 21:23:23.836: INFO: Scaling statefulset ss to 0
Sep  6 21:23:23.845: INFO: Waiting for statefulset status.replicas updated to 0
Sep  6 21:23:23.847: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:23:23.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6b2r2" for this suite.
Sep  6 21:23:29.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:23:29.895: INFO: namespace: e2e-tests-statefulset-6b2r2, resource: bindings, ignored listing per whitelist
Sep  6 21:23:29.951: INFO: namespace e2e-tests-statefulset-6b2r2 deletion completed in 6.083328489s

• [SLOW TEST:88.477 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
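
The kubectl exec calls above are what make a replica unhealthy and healthy again: moving index.html out of the nginx web root causes the pod's readiness check to fail, which halts ordered scaling, and moving it back restores readiness so scaling can proceed. The two commands against one replica, as this run issued them (namespace and pod name taken from this suite):

  # make ss-0 unready
  kubectl exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- \
    /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

  # restore readiness
  kubectl exec --namespace=e2e-tests-statefulset-6b2r2 ss-0 -- \
    /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'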
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:23:29.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0906 21:24:00.597286       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  6 21:24:00.597: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:24:00.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jl8cf" for this suite.
Sep  6 21:24:06.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:24:06.666: INFO: namespace: e2e-tests-gc-jl8cf, resource: bindings, ignored listing per whitelist
Sep  6 21:24:06.692: INFO: namespace e2e-tests-gc-jl8cf deletion completed in 6.091420216s

• [SLOW TEST:36.740 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
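
Deleting the deployment with deleteOptions.PropagationPolicy set to Orphan tells the garbage collector to leave the deployment's ReplicaSet in place, which is what the 30-second observation window verifies. A sketch of issuing that kind of delete from kubectl, with an illustrative deployment name (the flag spelling depends on the client version):

  # orphan dependents instead of cascading the delete
  kubectl delete deployment demo-deployment --cascade=false      # 1.13-era kubectl
  # kubectl delete deployment demo-deployment --cascade=orphan   # newer kubectl

  # the ReplicaSet the deployment created should still be listed afterwards
  kubectl get rs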
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:24:06.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4bad7a14-f087-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:24:06.937: INFO: Waiting up to 5m0s for pod "pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-cs2dm" to be "success or failure"
Sep  6 21:24:06.954: INFO: Pod "pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.837602ms
Sep  6 21:24:08.958: INFO: Pod "pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020608673s
Sep  6 21:24:10.961: INFO: Pod "pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024466158s
STEP: Saw pod success
Sep  6 21:24:10.961: INFO: Pod "pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:24:10.964: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 21:24:11.006: INFO: Waiting for pod pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008 to disappear
Sep  6 21:24:11.042: INFO: Pod pod-secrets-4bae1147-f087-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:24:11.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cs2dm" for this suite.
Sep  6 21:24:17.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:24:17.132: INFO: namespace: e2e-tests-secrets-cs2dm, resource: bindings, ignored listing per whitelist
Sep  6 21:24:17.189: INFO: namespace e2e-tests-secrets-cs2dm deletion completed in 6.14272591s

• [SLOW TEST:10.497 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
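
The secret volume in this test is mounted with an explicit defaultMode while the pod runs as a non-root user with an fsGroup, and the mounted file's mode and ownership are what get checked. A minimal sketch with illustrative names, mode, UID and GID values:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-mode-demo              # illustrative name
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo              # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                   # non-root
      fsGroup: 2000
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-mode-demo
        defaultMode: 0440               # octal mode applied to the projected files
  EOF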
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:24:17.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:24:17.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-2dvxz" to be "success or failure"
Sep  6 21:24:17.307: INFO: Pod "downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.731168ms
Sep  6 21:24:19.345: INFO: Pod "downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052045848s
Sep  6 21:24:21.349: INFO: Pod "downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055949476s
STEP: Saw pod success
Sep  6 21:24:21.349: INFO: Pod "downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:24:21.352: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:24:21.385: INFO: Waiting for pod downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008 to disappear
Sep  6 21:24:21.399: INFO: Pod downwardapi-volume-51dbf4fb-f087-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:24:21.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2dvxz" for this suite.
Sep  6 21:24:27.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:24:27.443: INFO: namespace: e2e-tests-downward-api-2dvxz, resource: bindings, ignored listing per whitelist
Sep  6 21:24:27.524: INFO: namespace e2e-tests-downward-api-2dvxz deletion completed in 6.122169021s

• [SLOW TEST:10.335 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
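
This spec mirrors the projected-volume CPU-limit sketch shown earlier, except the file comes from a plain downwardAPI volume and reports the container's CPU request rather than its limit. Only the volume stanza differs; an illustrative fragment that would slot into the same kind of pod spec:

    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m                 # report the request in millicores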
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:24:27.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Sep  6 21:24:27.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:30.546: INFO: stderr: ""
Sep  6 21:24:30.546: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 21:24:30.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:30.685: INFO: stderr: ""
Sep  6 21:24:30.685: INFO: stdout: "update-demo-nautilus-fgcst update-demo-nautilus-frmnk "
Sep  6 21:24:30.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:30.787: INFO: stderr: ""
Sep  6 21:24:30.787: INFO: stdout: ""
Sep  6 21:24:30.787: INFO: update-demo-nautilus-fgcst is created but not running
Sep  6 21:24:35.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:35.889: INFO: stderr: ""
Sep  6 21:24:35.889: INFO: stdout: "update-demo-nautilus-fgcst update-demo-nautilus-frmnk "
Sep  6 21:24:35.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:35.973: INFO: stderr: ""
Sep  6 21:24:35.973: INFO: stdout: ""
Sep  6 21:24:35.973: INFO: update-demo-nautilus-fgcst is created but not running
Sep  6 21:24:40.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:41.072: INFO: stderr: ""
Sep  6 21:24:41.072: INFO: stdout: "update-demo-nautilus-fgcst update-demo-nautilus-frmnk "
Sep  6 21:24:41.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:41.178: INFO: stderr: ""
Sep  6 21:24:41.178: INFO: stdout: "true"
Sep  6 21:24:41.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fgcst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:41.273: INFO: stderr: ""
Sep  6 21:24:41.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 21:24:41.274: INFO: validating pod update-demo-nautilus-fgcst
Sep  6 21:24:41.278: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 21:24:41.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Sep  6 21:24:41.278: INFO: update-demo-nautilus-fgcst is verified up and running
Sep  6 21:24:41.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frmnk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:41.366: INFO: stderr: ""
Sep  6 21:24:41.366: INFO: stdout: "true"
Sep  6 21:24:41.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frmnk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:24:41.454: INFO: stderr: ""
Sep  6 21:24:41.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 21:24:41.454: INFO: validating pod update-demo-nautilus-frmnk
Sep  6 21:24:41.458: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 21:24:41.458: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Sep  6 21:24:41.458: INFO: update-demo-nautilus-frmnk is verified up and running
STEP: rolling-update to new replication controller
Sep  6 21:24:41.461: INFO: scanned /root for discovery docs: 
Sep  6 21:24:41.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.247: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep  6 21:25:08.247: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 21:25:08.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.328: INFO: stderr: ""
Sep  6 21:25:08.328: INFO: stdout: "update-demo-kitten-ktj6f update-demo-kitten-szdp9 "
Sep  6 21:25:08.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ktj6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.414: INFO: stderr: ""
Sep  6 21:25:08.414: INFO: stdout: "true"
Sep  6 21:25:08.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ktj6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.511: INFO: stderr: ""
Sep  6 21:25:08.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep  6 21:25:08.511: INFO: validating pod update-demo-kitten-ktj6f
Sep  6 21:25:08.515: INFO: got data: {
  "image": "kitten.jpg"
}

Sep  6 21:25:08.515: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Sep  6 21:25:08.515: INFO: update-demo-kitten-ktj6f is verified up and running
Sep  6 21:25:08.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-szdp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.608: INFO: stderr: ""
Sep  6 21:25:08.608: INFO: stdout: "true"
Sep  6 21:25:08.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-szdp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbn8v'
Sep  6 21:25:08.713: INFO: stderr: ""
Sep  6 21:25:08.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep  6 21:25:08.713: INFO: validating pod update-demo-kitten-szdp9
Sep  6 21:25:08.717: INFO: got data: {
  "image": "kitten.jpg"
}

Sep  6 21:25:08.717: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep  6 21:25:08.717: INFO: update-demo-kitten-szdp9 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:25:08.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gbn8v" for this suite.
Sep  6 21:25:32.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:25:32.749: INFO: namespace: e2e-tests-kubectl-gbn8v, resource: bindings, ignored listing per whitelist
Sep  6 21:25:32.808: INFO: namespace e2e-tests-kubectl-gbn8v deletion completed in 24.086910214s

• [SLOW TEST:65.284 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
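The run above goes through the long-deprecated kubectl rolling-update path. For comparison, a minimal sketch of the same image swap done with a Deployment rollout; the deployment name update-demo and the label selector are assumptions for illustration, not objects this suite creates:

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
# confirm every pod now runs the kitten image, mirroring the template checks in the log
kubectl get pods -l name=update-demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'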
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:25:32.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:25:32.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-49bwt" to be "success or failure"
Sep  6 21:25:32.968: INFO: Pod "downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 45.941472ms
Sep  6 21:25:34.972: INFO: Pod "downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050504527s
Sep  6 21:25:36.977: INFO: Pod "downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055384579s
STEP: Saw pod success
Sep  6 21:25:36.977: INFO: Pod "downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:25:36.979: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:25:37.005: INFO: Waiting for pod downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008 to disappear
Sep  6 21:25:37.015: INFO: Pod downwardapi-volume-7eee9a89-f087-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:25:37.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-49bwt" for this suite.
Sep  6 21:25:43.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:25:43.133: INFO: namespace: e2e-tests-downward-api-49bwt, resource: bindings, ignored listing per whitelist
Sep  6 21:25:43.207: INFO: namespace e2e-tests-downward-api-49bwt deletion completed in 6.18851925s

• [SLOW TEST:10.399 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
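For readers reproducing the downward API check by hand, here is a minimal sketch of a pod that projects a metadata field with an explicit per-item mode; all names (downwardapi-mode-demo, podinfo) are invented and not taken from the test source:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                  # the per-item file mode this test asserts on
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-mode-demo   # the podname file should show mode 0400 (-r--------)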
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:25:43.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:25:43.288: INFO: Creating deployment "test-recreate-deployment"
Sep  6 21:25:43.304: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Sep  6 21:25:43.335: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Sep  6 21:25:45.342: INFO: Waiting deployment "test-recreate-deployment" to complete
Sep  6 21:25:45.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735024343, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735024343, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735024343, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735024343, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  6 21:25:47.348: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep  6 21:25:47.355: INFO: Updating deployment test-recreate-deployment
Sep  6 21:25:47.355: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Sep  6 21:25:48.000: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-g2tgj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2tgj/deployments/test-recreate-deployment,UID:851f467c-f087-11ea-b060-0242ac120006,ResourceVersion:223732,Generation:2,CreationTimestamp:2020-09-06 21:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-09-06 21:25:47 +0000 UTC 2020-09-06 21:25:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-06 21:25:47 +0000 UTC 2020-09-06 21:25:43 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Sep  6 21:25:48.012: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-g2tgj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2tgj/replicasets/test-recreate-deployment-589c4bfd,UID:87a33301-f087-11ea-b060-0242ac120006,ResourceVersion:223728,Generation:1,CreationTimestamp:2020-09-06 21:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 851f467c-f087-11ea-b060-0242ac120006 0xc00184f33f 0xc00184f350}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  6 21:25:48.012: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep  6 21:25:48.012: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-g2tgj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g2tgj/replicasets/test-recreate-deployment-5bf7f65dc,UID:85265813-f087-11ea-b060-0242ac120006,ResourceVersion:223720,Generation:2,CreationTimestamp:2020-09-06 21:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 851f467c-f087-11ea-b060-0242ac120006 0xc00184f5f0 0xc00184f5f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  6 21:25:48.016: INFO: Pod "test-recreate-deployment-589c4bfd-pff79" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-pff79,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-g2tgj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g2tgj/pods/test-recreate-deployment-589c4bfd-pff79,UID:87ac3cd4-f087-11ea-b060-0242ac120006,ResourceVersion:223733,Generation:0,CreationTimestamp:2020-09-06 21:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 87a33301-f087-11ea-b060-0242ac120006 0xc0017007af 0xc0017007c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngxwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngxwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ngxwc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001700aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001700ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 21:25:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-06 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 21:25:47 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-06 21:25:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:25:48.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-g2tgj" for this suite.
Sep  6 21:25:54.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:25:54.238: INFO: namespace: e2e-tests-deployment-g2tgj, resource: bindings, ignored listing per whitelist
Sep  6 21:25:54.260: INFO: namespace e2e-tests-deployment-g2tgj deletion completed in 6.240411264s

• [SLOW TEST:11.053 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
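The Recreate strategy visible in the object dump above is easy to observe by hand; a small sketch, with recreate-demo and the label app=recreate-demo as illustrative names (the two images are simply the ones this test happens to use):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate                  # all old pods are terminated before new ones are created
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: main
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl set image deployment/recreate-demo main=docker.io/library/nginx:1.14-alpine
kubectl get pods -l app=recreate-demo -w    # the redis pod disappears before the nginx pod is scheduled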
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:25:54.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-94jcn/secret-test-8bc3a18d-f087-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:25:54.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-94jcn" to be "success or failure"
Sep  6 21:25:54.525: INFO: Pod "pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 75.990677ms
Sep  6 21:25:56.529: INFO: Pod "pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079912182s
Sep  6 21:25:58.534: INFO: Pod "pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084789803s
STEP: Saw pod success
Sep  6 21:25:58.534: INFO: Pod "pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:25:58.537: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008 container env-test: 
STEP: delete the pod
Sep  6 21:25:58.577: INFO: Waiting for pod pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008 to disappear
Sep  6 21:25:58.585: INFO: Pod pod-configmaps-8bc4f6c0-f087-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:25:58.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-94jcn" for this suite.
Sep  6 21:26:04.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:26:04.666: INFO: namespace: e2e-tests-secrets-94jcn, resource: bindings, ignored listing per whitelist
Sep  6 21:26:04.683: INFO: namespace e2e-tests-secrets-94jcn deletion completed in 6.095059325s

• [SLOW TEST:10.423 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
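A minimal way to reproduce the secret-as-environment-variable check outside the suite; secret-env-demo, secret-env-pod and the key name are made-up examples:

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs secret-env-pod          # expected: SECRET_DATA=value-1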
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:26:04.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-91f11528-f087-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:26:04.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-mqzvs" to be "success or failure"
Sep  6 21:26:04.854: INFO: Pod "pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461989ms
Sep  6 21:26:06.859: INFO: Pod "pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007694768s
Sep  6 21:26:08.862: INFO: Pod "pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010704708s
STEP: Saw pod success
Sep  6 21:26:08.862: INFO: Pod "pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:26:08.864: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 21:26:08.919: INFO: Waiting for pod pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008 to disappear
Sep  6 21:26:09.004: INFO: Pod pod-projected-configmaps-91f1bc0e-f087-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:26:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mqzvs" for this suite.
Sep  6 21:26:15.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:26:15.068: INFO: namespace: e2e-tests-projected-mqzvs, resource: bindings, ignored listing per whitelist
Sep  6 21:26:15.103: INFO: namespace e2e-tests-projected-mqzvs deletion completed in 6.095093351s

• [SLOW TEST:10.419 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
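The non-root wrinkle in this test is just a pod-level runAsUser combined with a projected configMap source; an illustrative sketch (the names and the UID 1000 are assumptions):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # run the container as a non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs projected-cm-pod        # expected: value-1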
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:26:15.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:26:19.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jj56h" for this suite.
Sep  6 21:26:25.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:26:25.340: INFO: namespace: e2e-tests-kubelet-test-jj56h, resource: bindings, ignored listing per whitelist
Sep  6 21:26:25.377: INFO: namespace e2e-tests-kubelet-test-jj56h deletion completed in 6.123497545s

• [SLOW TEST:10.274 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
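The assertion behind this test is simply that a failed container reports a terminated state carrying a reason; a quick sketch with a deliberately failing command (the pod name always-fails is invented):

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
# once the container has exited, its terminated reason is visible in the pod status
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'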
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:26:25.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-tnmx9
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-tnmx9
STEP: Deleting pre-stop pod
Sep  6 21:26:42.592: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:26:42.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-tnmx9" for this suite.
Sep  6 21:27:20.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:27:20.656: INFO: namespace: e2e-tests-prestop-tnmx9, resource: bindings, ignored listing per whitelist
Sep  6 21:27:20.714: INFO: namespace e2e-tests-prestop-tnmx9 deletion completed in 38.115245716s

• [SLOW TEST:55.338 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
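The suite wires the preStop hook to an HTTP call against the server pod shown above; a simpler self-contained sketch of the same lifecycle field, with every name invented for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran >> /tmp/hook.log; sleep 5"]
EOF
# deleting the pod runs the preStop hook before SIGTERM is delivered to the container
kubectl delete pod prestop-demo --grace-period=30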
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:27:20.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Sep  6 21:27:20.852: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-2ntm8" to be "success or failure"
Sep  6 21:27:20.856: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061343ms
Sep  6 21:27:22.868: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016517817s
Sep  6 21:27:24.872: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019921389s
STEP: Saw pod success
Sep  6 21:27:24.872: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Sep  6 21:27:24.875: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Sep  6 21:27:24.894: INFO: Waiting for pod pod-host-path-test to disappear
Sep  6 21:27:24.898: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:27:24.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-2ntm8" for this suite.
Sep  6 21:27:30.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:27:31.028: INFO: namespace: e2e-tests-hostpath-2ntm8, resource: bindings, ignored listing per whitelist
Sep  6 21:27:31.041: INFO: namespace e2e-tests-hostpath-2ntm8 deletion completed in 6.112878183s

• [SLOW TEST:10.326 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
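A hand-rolled version of the hostPath mode check; the pod and volume names and the /tmp path are illustrative assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory
EOF
kubectl logs hostpath-mode-demo      # prints the mode of the mounted host directory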
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:27:31.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c56a3aa6-f087-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c56a3aa6-f087-11ea-b72c-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:28:53.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-td794" for this suite.
Sep  6 21:29:15.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:29:15.559: INFO: namespace: e2e-tests-configmap-td794, resource: bindings, ignored listing per whitelist
Sep  6 21:29:15.623: INFO: namespace e2e-tests-configmap-td794 deletion completed in 22.089863057s

• [SLOW TEST:104.582 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
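Most of this test's runtime is spent waiting for the kubelet's periodic sync to propagate a configMap edit into an already-mounted volume; a sketch you can watch by hand (all names invented):

kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-upd-pod
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: cm-upd-demo
EOF
kubectl patch configmap cm-upd-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-upd-pod           # value-2 appears after the kubelet's next volume sync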
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:29:15.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-03c7f4ca-f088-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:29:15.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-dnljh" to be "success or failure"
Sep  6 21:29:15.816: INFO: Pod "pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.455469ms
Sep  6 21:29:17.820: INFO: Pod "pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023893486s
Sep  6 21:29:19.824: INFO: Pod "pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027879182s
STEP: Saw pod success
Sep  6 21:29:19.824: INFO: Pod "pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:29:19.827: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Sep  6 21:29:19.855: INFO: Waiting for pod pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008 to disappear
Sep  6 21:29:19.859: INFO: Pod pod-configmaps-03c87675-f088-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:29:19.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dnljh" for this suite.
Sep  6 21:29:25.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:29:25.967: INFO: namespace: e2e-tests-configmap-dnljh, resource: bindings, ignored listing per whitelist
Sep  6 21:29:25.973: INFO: namespace e2e-tests-configmap-dnljh deletion completed in 6.110268303s

• [SLOW TEST:10.350 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
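This is the one-shot counterpart of the previous ConfigMap sketch: mount the configMap, read the key once, and let the pod complete (names again invented):

kubectl create configmap cm-vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: cm-vol-demo
EOF
kubectl logs cm-vol-pod              # expected: value-1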
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:29:25.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Sep  6 21:29:26.659: INFO: created pod pod-service-account-defaultsa
Sep  6 21:29:26.659: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep  6 21:29:26.693: INFO: created pod pod-service-account-mountsa
Sep  6 21:29:26.693: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep  6 21:29:26.710: INFO: created pod pod-service-account-nomountsa
Sep  6 21:29:26.710: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep  6 21:29:26.731: INFO: created pod pod-service-account-defaultsa-mountspec
Sep  6 21:29:26.731: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep  6 21:29:26.779: INFO: created pod pod-service-account-mountsa-mountspec
Sep  6 21:29:26.779: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep  6 21:29:26.785: INFO: created pod pod-service-account-nomountsa-mountspec
Sep  6 21:29:26.785: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep  6 21:29:26.828: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep  6 21:29:26.828: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep  6 21:29:26.856: INFO: created pod pod-service-account-mountsa-nomountspec
Sep  6 21:29:26.856: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep  6 21:29:26.929: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep  6 21:29:26.929: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:29:26.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-lrbmq" for this suite.
Sep  6 21:29:55.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:29:55.103: INFO: namespace: e2e-tests-svcaccounts-lrbmq, resource: bindings, ignored listing per whitelist
Sep  6 21:29:55.110: INFO: namespace e2e-tests-svcaccounts-lrbmq deletion completed in 28.176899701s

• [SLOW TEST:29.136 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
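The matrix of pods above combines the pod-level automountServiceAccountToken field with the matching field on the ServiceAccount; when both are set, the pod-level value wins. The pod-level half can be sketched like this (no-token-demo is an invented name):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  restartPolicy: Never
  automountServiceAccountToken: false   # opt this pod out of the token volume mount
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount 2>/dev/null || echo no token mounted"]
EOF
kubectl logs no-token-demo           # expected: no token mounted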
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:29:55.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:29:55.256: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Sep  6 21:29:55.259: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dzmvn/daemonsets","resourceVersion":"224527"},"items":null}

Sep  6 21:29:55.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dzmvn/pods","resourceVersion":"224527"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:29:55.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dzmvn" for this suite.
Sep  6 21:30:01.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:30:01.361: INFO: namespace: e2e-tests-daemonsets-dzmvn, resource: bindings, ignored listing per whitelist
Sep  6 21:30:01.374: INFO: namespace e2e-tests-daemonsets-dzmvn deletion completed in 6.105079462s

S [SKIPPING] [6.264 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Sep  6 21:29:55.256: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:30:01.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:30:09.601: INFO: Waiting up to 5m0s for pod "client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008" in namespace "e2e-tests-pods-ls854" to be "success or failure"
Sep  6 21:30:09.604: INFO: Pod "client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.139192ms
Sep  6 21:30:11.608: INFO: Pod "client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006844127s
Sep  6 21:30:13.612: INFO: Pod "client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010781633s
STEP: Saw pod success
Sep  6 21:30:13.612: INFO: Pod "client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:30:13.614: INFO: Trying to get logs from node hunter-worker pod client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008 container env3cont: 
STEP: delete the pod
Sep  6 21:30:13.646: INFO: Waiting for pod client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008 to disappear
Sep  6 21:30:13.663: INFO: Pod client-envvars-23d1cd04-f088-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:30:13.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ls854" for this suite.
Sep  6 21:30:53.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:30:53.738: INFO: namespace: e2e-tests-pods-ls854, resource: bindings, ignored listing per whitelist
Sep  6 21:30:53.758: INFO: namespace e2e-tests-pods-ls854 deletion completed in 40.090843798s

• [SLOW TEST:52.383 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
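The environment variables checked here are the ones the kubelet injects for services that already exist when a pod starts; a sketch of observing them (backend and env-check are invented names, and the ordering matters: the service must exist before the client pod is created):

kubectl create deployment backend --image=docker.io/library/nginx:1.14-alpine
kubectl expose deployment backend --port=80
kubectl run env-check --image=busybox --restart=Never -- sh -c 'env | grep ^BACKEND_'
kubectl logs env-check               # BACKEND_SERVICE_HOST, BACKEND_SERVICE_PORT, ...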
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:30:53.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  6 21:30:53.862: INFO: Waiting up to 5m0s for pod "pod-3e3c201f-f088-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-knwgb" to be "success or failure"
Sep  6 21:30:53.880: INFO: Pod "pod-3e3c201f-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.620904ms
Sep  6 21:30:55.883: INFO: Pod "pod-3e3c201f-f088-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021198628s
Sep  6 21:30:57.887: INFO: Pod "pod-3e3c201f-f088-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025169684s
STEP: Saw pod success
Sep  6 21:30:57.887: INFO: Pod "pod-3e3c201f-f088-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:30:57.891: INFO: Trying to get logs from node hunter-worker2 pod pod-3e3c201f-f088-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:30:57.923: INFO: Waiting for pod pod-3e3c201f-f088-11ea-b72c-0242ac110008 to disappear
Sep  6 21:30:57.939: INFO: Pod pod-3e3c201f-f088-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:30:57.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-knwgb" for this suite.
Sep  6 21:31:03.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:31:04.029: INFO: namespace: e2e-tests-emptydir-knwgb, resource: bindings, ignored listing per whitelist
Sep  6 21:31:04.055: INFO: namespace e2e-tests-emptydir-knwgb deletion completed in 6.111152124s

• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
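The (non-root,0644,tmpfs) triple in the test name maps to a pod-level runAsUser, a file created with mode 0644, and an emptyDir with medium Memory; an illustrative sketch (names and the UID are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -ln /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo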
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:31:04.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:31:08.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-z9ql8" for this suite.
Sep  6 21:31:46.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:31:46.370: INFO: namespace: e2e-tests-kubelet-test-z9ql8, resource: bindings, ignored listing per whitelist
Sep  6 21:31:46.374: INFO: namespace e2e-tests-kubelet-test-z9ql8 deletion completed in 38.129420907s

• [SLOW TEST:42.320 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
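hostAliases entries are written into the pod's /etc/hosts by the kubelet; a minimal sketch (names and addresses invented):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo        # the kubelet appends a "127.0.0.1 foo.local bar.local" entry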
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:31:46.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep  6 21:31:47.175: INFO: Pod name wrapped-volume-race-5df60bed-f088-11ea-b72c-0242ac110008: Found 0 pods out of 5
Sep  6 21:31:52.183: INFO: Pod name wrapped-volume-race-5df60bed-f088-11ea-b72c-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5df60bed-f088-11ea-b72c-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2jpvn, will wait for the garbage collector to delete the pods
Sep  6 21:34:24.267: INFO: Deleting ReplicationController wrapped-volume-race-5df60bed-f088-11ea-b72c-0242ac110008 took: 6.696139ms
Sep  6 21:34:24.367: INFO: Terminating ReplicationController wrapped-volume-race-5df60bed-f088-11ea-b72c-0242ac110008 pods took: 100.293089ms
STEP: Creating RC which spawns configmap-volume pods
Sep  6 21:35:00.797: INFO: Pod name wrapped-volume-race-d167ba05-f088-11ea-b72c-0242ac110008: Found 0 pods out of 5
Sep  6 21:35:05.805: INFO: Pod name wrapped-volume-race-d167ba05-f088-11ea-b72c-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d167ba05-f088-11ea-b72c-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2jpvn, will wait for the garbage collector to delete the pods
Sep  6 21:37:09.915: INFO: Deleting ReplicationController wrapped-volume-race-d167ba05-f088-11ea-b72c-0242ac110008 took: 29.234544ms
Sep  6 21:37:10.115: INFO: Terminating ReplicationController wrapped-volume-race-d167ba05-f088-11ea-b72c-0242ac110008 pods took: 200.332373ms
STEP: Creating RC which spawns configmap-volume pods
Sep  6 21:37:49.977: INFO: Pod name wrapped-volume-race-3639ad50-f089-11ea-b72c-0242ac110008: Found 0 pods out of 5
Sep  6 21:37:54.985: INFO: Pod name wrapped-volume-race-3639ad50-f089-11ea-b72c-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3639ad50-f089-11ea-b72c-0242ac110008 in namespace e2e-tests-emptydir-wrapper-2jpvn, will wait for the garbage collector to delete the pods
Sep  6 21:40:29.070: INFO: Deleting ReplicationController wrapped-volume-race-3639ad50-f089-11ea-b72c-0242ac110008 took: 6.164361ms
Sep  6 21:40:29.270: INFO: Terminating ReplicationController wrapped-volume-race-3639ad50-f089-11ea-b72c-0242ac110008 pods took: 200.218646ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:41:11.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2jpvn" for this suite.
Sep  6 21:41:19.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:41:19.209: INFO: namespace: e2e-tests-emptydir-wrapper-2jpvn, resource: bindings, ignored listing per whitelist
Sep  6 21:41:19.321: INFO: namespace e2e-tests-emptydir-wrapper-2jpvn deletion completed in 8.143306808s

• [SLOW TEST:572.946 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
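The race this test guards against only appears when a single pod mounts many ConfigMap-backed volumes at once. A minimal sketch of that pod shape, assuming the v1.13-era k8s.io/api/core/v1 and k8s.io/apimachinery/pkg/apis/meta/v1 packages (aliased v1 and metav1 here); every name, the image and the volume count are illustrative, not the suite's exact values (the run above uses 50 ConfigMaps and RCs of 5 replicas):

package sketches

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrappedConfigMapPod sketches the shape of the pods the RC above spawns: one
// container mounting several ConfigMap volumes at once, the pattern the emptyDir
// wrapper historically raced on.
func wrappedConfigMapPod() *v1.Pod {
    var volumes []v1.Volume
    var mounts []v1.VolumeMount
    for i := 0; i < 3; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical names
        volumes = append(volumes, v1.Volume{
            Name: name,
            VolumeSource: v1.VolumeSource{
                ConfigMap: &v1.ConfigMapVolumeSource{
                    LocalObjectReference: v1.LocalObjectReference{Name: name},
                },
            },
        })
        mounts = append(mounts, v1.VolumeMount{Name: name, MountPath: "/etc/" + name})
    }
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:         "test-container",
                Image:        "busybox", // assumed image
                Command:      []string{"sleep", "10000"},
                VolumeMounts: mounts,
            }},
            Volumes: volumes,
        },
    }
}

The later sketches in this log reuse the same import aliases unless noted otherwise.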
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:41:19.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep  6 21:41:24.006: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b3175545-f089-11ea-b72c-0242ac110008"
Sep  6 21:41:24.006: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b3175545-f089-11ea-b72c-0242ac110008" in namespace "e2e-tests-pods-j5qwd" to be "terminated due to deadline exceeded"
Sep  6 21:41:24.065: INFO: Pod "pod-update-activedeadlineseconds-b3175545-f089-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 59.048973ms
Sep  6 21:41:26.069: INFO: Pod "pod-update-activedeadlineseconds-b3175545-f089-11ea-b72c-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.063102018s
Sep  6 21:41:26.069: INFO: Pod "pod-update-activedeadlineseconds-b3175545-f089-11ea-b72c-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:41:26.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-j5qwd" for this suite.
Sep  6 21:41:32.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:41:32.168: INFO: namespace: e2e-tests-pods-j5qwd, resource: bindings, ignored listing per whitelist
Sep  6 21:41:32.183: INFO: namespace e2e-tests-pods-j5qwd deletion completed in 6.110513749s

• [SLOW TEST:12.862 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
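The wait above succeeds once the kubelet fails the pod with reason "DeadlineExceeded". A minimal sketch of the field being exercised, spec.activeDeadlineSeconds, under the same import assumptions as the earlier sketch; the name, image and deadline value are assumptions (the test patches a short deadline onto an already running pod):

// podWithActiveDeadline shows a pod whose activeDeadlineSeconds will cause the
// kubelet to fail it shortly after start.
func podWithActiveDeadline() *v1.Pod {
    deadline := int64(5) // seconds; illustrative value
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
        Spec: v1.PodSpec{
            ActiveDeadlineSeconds: &deadline,
            Containers: []v1.Container{{
                Name:    "main",
                Image:   "busybox",
                Command: []string{"sleep", "600"}, // outlives the deadline on purpose
            }},
        },
    }
}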
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:41:32.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Sep  6 21:41:36.857: INFO: Successfully updated pod "labelsupdatebac3ae90-f089-11ea-b72c-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:41:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-45ghj" for this suite.
Sep  6 21:42:00.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:42:00.954: INFO: namespace: e2e-tests-downward-api-45ghj, resource: bindings, ignored listing per whitelist
Sep  6 21:42:01.054: INFO: namespace e2e-tests-downward-api-45ghj deletion completed in 22.141074765s

• [SLOW TEST:28.870 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
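The "Successfully updated pod" line above corresponds to changing the pod's labels and then watching the projected file refresh. A minimal sketch of a pod that projects metadata.labels through a downward API volume (same import assumptions; names, image and label values are illustrative):

// labelsDownwardAPIPod projects the pod's labels into /etc/podinfo/labels; the
// kubelet rewrites that file when the labels are updated.
func labelsDownwardAPIPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate",
            Labels: map[string]string{"key": "value1"},
        },
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
}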
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:42:01.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:42:01.162: INFO: Creating ReplicaSet my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008
Sep  6 21:42:01.184: INFO: Pod name my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008: Found 0 pods out of 1
Sep  6 21:42:06.188: INFO: Pod name my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008: Found 1 pods out of 1
Sep  6 21:42:06.188: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008" is running
Sep  6 21:42:06.191: INFO: Pod "my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008-mcwqz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:42:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:42:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:42:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:42:01 +0000 UTC Reason: Message:}])
Sep  6 21:42:06.191: INFO: Trying to dial the pod
Sep  6 21:42:11.202: INFO: Controller my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008: Got expected result from replica 1 [my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008-mcwqz]: "my-hostname-basic-cbfaf579-f089-11ea-b72c-0242ac110008-mcwqz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:42:11.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-z54lw" for this suite.
Sep  6 21:42:17.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:42:17.312: INFO: namespace: e2e-tests-replicaset-z54lw, resource: bindings, ignored listing per whitelist
Sep  6 21:42:17.328: INFO: namespace e2e-tests-replicaset-z54lw deletion completed in 6.122801786s

• [SLOW TEST:16.274 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
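The replica dialed above simply serves its own hostname, which is how the test verifies each replica individually. A minimal sketch of such a ReplicaSet, additionally assuming appsv1 "k8s.io/api/apps/v1"; the image and port are assumptions, not the suite's exact serve-hostname build:

// hostnameReplicaSet sketches a one-replica ReplicaSet whose pod answers HTTP
// requests with its own hostname.
func hostnameReplicaSet() *appsv1.ReplicaSet {
    replicas := int32(1)
    labels := map[string]string{"name": "my-hostname-basic"}
    return &appsv1.ReplicaSet{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{
                    Containers: []v1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "k8s.gcr.io/serve-hostname:1.1",        // assumed image
                        Ports: []v1.ContainerPort{{ContainerPort: 9376}}, // assumed port
                    }},
                },
            },
        },
    }
}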
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:42:17.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9l4nv
Sep  6 21:42:21.439: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9l4nv
STEP: checking the pod's current state and verifying that restartCount is present
Sep  6 21:42:21.442: INFO: Initial restart count of pod liveness-exec is 0
Sep  6 21:43:11.547: INFO: Restart count of pod e2e-tests-container-probe-9l4nv/liveness-exec is now 1 (50.104864205s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:43:11.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9l4nv" for this suite.
Sep  6 21:43:17.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:43:17.587: INFO: namespace: e2e-tests-container-probe-9l4nv, resource: bindings, ignored listing per whitelist
Sep  6 21:43:17.655: INFO: namespace e2e-tests-container-probe-9l4nv deletion completed in 6.085994164s

• [SLOW TEST:60.326 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
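The restart counted above (0 to 1 after roughly 50s) is produced by an exec liveness probe that starts failing once the container deletes its own health file. A minimal sketch under the same import assumptions; image, timings and thresholds are illustrative, and note that the v1.13-era API uses the embedded Handler field where newer releases use ProbeHandler:

// livenessExecPod creates /tmp/health, removes it after a while, and probes it
// with "cat /tmp/health", so the kubelet restarts the container.
func livenessExecPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:    "liveness",
                Image:   "busybox",
                Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
                LivenessProbe: &v1.Probe{
                    Handler: v1.Handler{ // ProbeHandler in newer k8s.io/api releases
                        Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1,
                },
            }},
        },
    }
}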
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:43:17.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Sep  6 21:43:17.789: INFO: Waiting up to 5m0s for pod "var-expansion-f9a69317-f089-11ea-b72c-0242ac110008" in namespace "e2e-tests-var-expansion-snsrm" to be "success or failure"
Sep  6 21:43:17.802: INFO: Pod "var-expansion-f9a69317-f089-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.040607ms
Sep  6 21:43:19.899: INFO: Pod "var-expansion-f9a69317-f089-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109326316s
Sep  6 21:43:21.921: INFO: Pod "var-expansion-f9a69317-f089-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131920834s
STEP: Saw pod success
Sep  6 21:43:21.922: INFO: Pod "var-expansion-f9a69317-f089-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:43:21.924: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-f9a69317-f089-11ea-b72c-0242ac110008 container dapi-container: 
STEP: delete the pod
Sep  6 21:43:21.948: INFO: Waiting for pod var-expansion-f9a69317-f089-11ea-b72c-0242ac110008 to disappear
Sep  6 21:43:21.957: INFO: Pod var-expansion-f9a69317-f089-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:43:21.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-snsrm" for this suite.
Sep  6 21:43:27.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:43:28.040: INFO: namespace: e2e-tests-var-expansion-snsrm, resource: bindings, ignored listing per whitelist
Sep  6 21:43:28.046: INFO: namespace e2e-tests-var-expansion-snsrm deletion completed in 6.086021047s

• [SLOW TEST:10.391 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
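"Env composition" here means one environment variable referencing others with $(VAR) syntax, which the kubelet expands before the container starts. A minimal sketch under the same import assumptions; names and values are illustrative:

// envCompositionPod composes FOOBAR out of FOO and BAR via $(VAR) expansion.
func envCompositionPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []v1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    // expanded to "foo-value;;bar-value" before the container runs
                    {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }
}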
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:43:28.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep  6 21:43:28.181: INFO: Waiting up to 5m0s for pod "pod-ffd55688-f089-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-4f7f6" to be "success or failure"
Sep  6 21:43:28.197: INFO: Pod "pod-ffd55688-f089-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.678947ms
Sep  6 21:43:30.201: INFO: Pod "pod-ffd55688-f089-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020536139s
Sep  6 21:43:32.205: INFO: Pod "pod-ffd55688-f089-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024659055s
STEP: Saw pod success
Sep  6 21:43:32.205: INFO: Pod "pod-ffd55688-f089-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:43:32.209: INFO: Trying to get logs from node hunter-worker2 pod pod-ffd55688-f089-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:43:32.250: INFO: Waiting for pod pod-ffd55688-f089-11ea-b72c-0242ac110008 to disappear
Sep  6 21:43:32.269: INFO: Pod pod-ffd55688-f089-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:43:32.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4f7f6" for this suite.
Sep  6 21:43:38.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:43:38.336: INFO: namespace: e2e-tests-emptydir-4f7f6, resource: bindings, ignored listing per whitelist
Sep  6 21:43:38.389: INFO: namespace e2e-tests-emptydir-4f7f6 deletion completed in 6.11645767s

• [SLOW TEST:10.342 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
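"(non-root,0644,default)" names the three knobs this case turns: a non-root security context, a 0644 file mode, and the default emptyDir medium. A minimal sketch under the same import assumptions; the UID, image and paths are illustrative:

// emptyDirNonRootPod runs as a non-root UID, writes a 0644 file into an emptyDir
// on the default medium, and prints the resulting mode.
func emptyDirNonRootPod() *v1.Pod {
    uid := int64(1001) // any non-root UID; the suite uses its own fixed test UID
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
        Spec: v1.PodSpec{
            RestartPolicy:   v1.RestartPolicyNever,
            SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
            Containers: []v1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "echo hello > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
                VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []v1.Volume{{
                Name: "test-volume",
                VolumeSource: v1.VolumeSource{
                    EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
                },
            }},
        },
    }
}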
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:43:38.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  6 21:43:38.499: INFO: Waiting up to 5m0s for pod "pod-05fc65d8-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-wxsd7" to be "success or failure"
Sep  6 21:43:38.502: INFO: Pod "pod-05fc65d8-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910874ms
Sep  6 21:43:40.515: INFO: Pod "pod-05fc65d8-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016157864s
Sep  6 21:43:42.519: INFO: Pod "pod-05fc65d8-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020307094s
STEP: Saw pod success
Sep  6 21:43:42.519: INFO: Pod "pod-05fc65d8-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:43:42.522: INFO: Trying to get logs from node hunter-worker2 pod pod-05fc65d8-f08a-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:43:42.583: INFO: Waiting for pod pod-05fc65d8-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:43:42.593: INFO: Pod pod-05fc65d8-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:43:42.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wxsd7" for this suite.
Sep  6 21:43:48.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:43:48.643: INFO: namespace: e2e-tests-emptydir-wxsd7, resource: bindings, ignored listing per whitelist
Sep  6 21:43:48.690: INFO: namespace e2e-tests-emptydir-wxsd7 deletion completed in 6.094124492s

• [SLOW TEST:10.301 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
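The (non-root,0666,default) case above differs from the 0644 sketch only in the requested file mode; a sketch of the one line that changes:

// Only the mode requested inside the container changes relative to the 0644 sketch.
const emptyDir0666Cmd = "echo hello > /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"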
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:43:48.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep  6 21:43:56.918: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:43:56.923: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:43:58.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:43:58.970: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:00.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:00.947: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:02.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:02.926: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:04.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:04.927: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:06.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:06.926: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:08.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:08.935: INFO: Pod pod-with-poststart-http-hook still exists
Sep  6 21:44:10.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  6 21:44:10.928: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:44:10.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-l47kk" for this suite.
Sep  6 21:44:32.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:44:33.043: INFO: namespace: e2e-tests-container-lifecycle-hook-l47kk, resource: bindings, ignored listing per whitelist
Sep  6 21:44:33.075: INFO: namespace e2e-tests-container-lifecycle-hook-l47kk deletion completed in 22.143692198s

• [SLOW TEST:44.385 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
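The "handler" pod created in the BeforeEach above is the target of the postStart HTTP hook: when the hook pod starts, the kubelet issues an HTTP GET against that handler, which is what "check poststart hook" verifies. A minimal sketch under the same import assumptions, plus "k8s.io/apimachinery/pkg/util/intstr"; path, port and image are assumptions, and the v1.13-era API uses Handler where newer releases use LifecycleHandler:

// postStartHTTPHookPod fires an HTTP GET at the handler pod right after its own
// container starts.
func postStartHTTPHookPod(handlerIP string) *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "k8s.gcr.io/pause:3.1", // assumed image
                Lifecycle: &v1.Lifecycle{
                    PostStart: &v1.Handler{ // LifecycleHandler in newer k8s.io/api releases
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/echo?msg=poststart",
                            Host: handlerIP, // IP of the handler pod created beforehand
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
}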
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:44:33.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-vf8s
STEP: Creating a pod to test atomic-volume-subpath
Sep  6 21:44:33.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vf8s" in namespace "e2e-tests-subpath-xrrwr" to be "success or failure"
Sep  6 21:44:33.218: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.888754ms
Sep  6 21:44:35.222: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020667356s
Sep  6 21:44:37.225: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024225007s
Sep  6 21:44:39.229: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=true. Elapsed: 6.02812419s
Sep  6 21:44:41.248: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 8.046764308s
Sep  6 21:44:43.252: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 10.051177323s
Sep  6 21:44:45.256: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 12.05491762s
Sep  6 21:44:47.260: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 14.059078127s
Sep  6 21:44:49.264: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 16.063395075s
Sep  6 21:44:51.281: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 18.080375998s
Sep  6 21:44:53.285: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 20.084453684s
Sep  6 21:44:55.290: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 22.088783106s
Sep  6 21:44:57.312: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Running", Reason="", readiness=false. Elapsed: 24.110749711s
Sep  6 21:44:59.324: INFO: Pod "pod-subpath-test-secret-vf8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.122702285s
STEP: Saw pod success
Sep  6 21:44:59.324: INFO: Pod "pod-subpath-test-secret-vf8s" satisfied condition "success or failure"
Sep  6 21:44:59.326: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-vf8s container test-container-subpath-secret-vf8s: 
STEP: delete the pod
Sep  6 21:44:59.476: INFO: Waiting for pod pod-subpath-test-secret-vf8s to disappear
Sep  6 21:44:59.583: INFO: Pod pod-subpath-test-secret-vf8s no longer exists
STEP: Deleting pod pod-subpath-test-secret-vf8s
Sep  6 21:44:59.583: INFO: Deleting pod "pod-subpath-test-secret-vf8s" in namespace "e2e-tests-subpath-xrrwr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:44:59.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-xrrwr" for this suite.
Sep  6 21:45:05.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:45:05.644: INFO: namespace: e2e-tests-subpath-xrrwr, resource: bindings, ignored listing per whitelist
Sep  6 21:45:05.717: INFO: namespace e2e-tests-subpath-xrrwr deletion completed in 6.127705712s

• [SLOW TEST:32.641 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
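Secrets are "atomic writer" volumes: the kubelet publishes their files through a symlinked staging directory, and subPath mounts have to keep working across those atomic updates. A minimal sketch of mounting a single secret key via subPath, under the same import assumptions; the secret name, key and paths are illustrative:

// secretSubpathPod mounts one key of a secret as a single file using subPath.
func secretSubpathPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "test-container-subpath-secret",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /test-volume/secret-file"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume/secret-file", // the single key appears as this file
                    SubPath:   "secret-key",               // key inside the secret volume
                }},
            }},
            Volumes: []v1.Volume{{
                Name: "test-volume",
                VolumeSource: v1.VolumeSource{
                    Secret: &v1.SecretVolumeSource{SecretName: "my-secret"},
                },
            }},
        },
    }
}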
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:45:05.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:45:05.843: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: 
alternatives.log
containers/

[the same directory listing was returned for each of the remaining proxy requests; the rest of this test's output and the header of the following [sig-node] ConfigMap test are missing from the captured log]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pknmm/configmap-test-3dceb9a3-f08a-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:45:12.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-pknmm" to be "success or failure"
Sep  6 21:45:12.212: INFO: Pod "pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.23324ms
Sep  6 21:45:14.216: INFO: Pod "pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028670344s
Sep  6 21:45:16.220: INFO: Pod "pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033021389s
STEP: Saw pod success
Sep  6 21:45:16.220: INFO: Pod "pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:45:16.223: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008 container env-test: 
STEP: delete the pod
Sep  6 21:45:16.242: INFO: Waiting for pod pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:45:16.253: INFO: Pod pod-configmaps-3dd62f35-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:45:16.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pknmm" for this suite.
Sep  6 21:45:22.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:45:22.349: INFO: namespace: e2e-tests-configmap-pknmm, resource: bindings, ignored listing per whitelist
Sep  6 21:45:22.384: INFO: namespace e2e-tests-configmap-pknmm deletion completed in 6.127637479s

• [SLOW TEST:10.365 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
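Consuming a ConfigMap "via environment variable" means an env entry whose value comes from a ConfigMap key rather than a literal. A minimal sketch under the same import assumptions; the ConfigMap name, key and env var name are illustrative:

// configMapEnvPod injects one ConfigMap key into the container's environment.
func configMapEnvPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []v1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &v1.EnvVarSource{
                        ConfigMapKeyRef: &v1.ConfigMapKeySelector{
                            LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
}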
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:45:22.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:45:22.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-tkgh6" to be "success or failure"
Sep  6 21:45:22.486: INFO: Pod "downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.475754ms
Sep  6 21:45:24.490: INFO: Pod "downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015312155s
Sep  6 21:45:26.494: INFO: Pod "downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018987073s
STEP: Saw pod success
Sep  6 21:45:26.494: INFO: Pod "downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:45:26.497: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:45:26.533: INFO: Waiting for pod downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:45:26.540: INFO: Pod downwardapi-volume-43f533f0-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:45:26.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tkgh6" for this suite.
Sep  6 21:45:32.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:45:32.595: INFO: namespace: e2e-tests-projected-tkgh6, resource: bindings, ignored listing per whitelist
Sep  6 21:45:32.644: INFO: namespace e2e-tests-projected-tkgh6 deletion completed in 6.100492599s

• [SLOW TEST:10.260 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
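The "downward API volume plugin" exercised here is the projected form: a resourceFieldRef that writes the container's memory limit into a file. A minimal sketch under the same import assumptions, plus "k8s.io/apimachinery/pkg/api/resource"; the limit value, paths and names are illustrative:

// projectedMemoryLimitPod projects the container's own memory limit into
// /etc/podinfo/memory_limit via a projected downward API source.
func projectedMemoryLimitPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memlimit"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: v1.ResourceRequirements{
                    Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            DownwardAPI: &v1.DownwardAPIProjection{
                                Items: []v1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    ResourceFieldRef: &v1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}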
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:45:32.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6vlbf
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-6vlbf
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-6vlbf
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-6vlbf
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-6vlbf
Sep  6 21:45:36.853: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6vlbf, name: ss-0, uid: 4a564c4b-f08a-11ea-b060-0242ac120006, status phase: Pending. Waiting for statefulset controller to delete.
Sep  6 21:45:39.428: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6vlbf, name: ss-0, uid: 4a564c4b-f08a-11ea-b060-0242ac120006, status phase: Failed. Waiting for statefulset controller to delete.
Sep  6 21:45:39.435: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6vlbf, name: ss-0, uid: 4a564c4b-f08a-11ea-b060-0242ac120006, status phase: Failed. Waiting for statefulset controller to delete.
Sep  6 21:45:39.470: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-6vlbf
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-6vlbf
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-6vlbf and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Sep  6 21:45:59.641: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6vlbf
Sep  6 21:45:59.644: INFO: Scaling statefulset ss to 0
Sep  6 21:46:09.678: INFO: Waiting for statefulset status.replicas updated to 0
Sep  6 21:46:09.681: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:46:09.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6vlbf" for this suite.
Sep  6 21:46:15.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:46:15.775: INFO: namespace: e2e-tests-statefulset-6vlbf, resource: bindings, ignored listing per whitelist
Sep  6 21:46:15.814: INFO: namespace e2e-tests-statefulset-6vlbf deletion completed in 6.109470873s

• [SLOW TEST:43.170 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
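The eviction above is provoked by a port conflict: a plain pod already holds a host port on the chosen node, so the StatefulSet's ss-0 fails and the controller has to delete and recreate it until the conflicting pod is removed. A minimal sketch of the conflicting StatefulSet, assuming appsv1 "k8s.io/api/apps/v1" as before; the node name, image and port number are assumptions:

// conflictingPortStatefulSet pins its pod to a node and requests a hostPort that
// an existing pod on that node already owns, so ss-0 initially fails.
func conflictingPortStatefulSet() *appsv1.StatefulSet {
    replicas := int32(1)
    labels := map[string]string{"app": "ss-conflict"}
    return &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{
                    NodeName: "hunter-worker", // assumed: the node already holding the host port
                    Containers: []v1.Container{{
                        Name:  "webserver",
                        Image: "nginx", // assumed image
                        Ports: []v1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
                    }},
                },
            },
        },
    }
}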
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:46:15.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:46:15.987: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-h9q97" to be "success or failure"
Sep  6 21:46:16.004: INFO: Pod "downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.563144ms
Sep  6 21:46:18.008: INFO: Pod "downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020273432s
Sep  6 21:46:20.015: INFO: Pod "downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027704643s
STEP: Saw pod success
Sep  6 21:46:20.015: INFO: Pod "downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:46:20.018: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:46:20.051: INFO: Waiting for pod downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:46:20.068: INFO: Pod downwardapi-volume-63d6050e-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:46:20.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h9q97" for this suite.
Sep  6 21:46:26.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:46:26.111: INFO: namespace: e2e-tests-projected-h9q97, resource: bindings, ignored listing per whitelist
Sep  6 21:46:26.166: INFO: namespace e2e-tests-projected-h9q97 deletion completed in 6.094699635s

• [SLOW TEST:10.351 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
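Relative to the memory-limit sketch above, this case only changes the volume source: DefaultMode on the projected volume sets the permission bits on every projected file. A minimal sketch of that volume source; the mode and path are illustrative:

// projectedVolumeWithDefaultMode applies one DefaultMode to all projected files.
func projectedVolumeWithDefaultMode() v1.VolumeSource {
    mode := int32(0400)
    return v1.VolumeSource{
        Projected: &v1.ProjectedVolumeSource{
            DefaultMode: &mode, // applied to every file in the volume
            Sources: []v1.VolumeProjection{{
                DownwardAPI: &v1.DownwardAPIProjection{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            }},
        },
    }
}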
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:46:26.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0906 21:46:36.289190       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  6 21:46:36.289: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:46:36.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6zwnv" for this suite.
Sep  6 21:46:42.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:46:42.324: INFO: namespace: e2e-tests-gc-6zwnv, resource: bindings, ignored listing per whitelist
Sep  6 21:46:42.431: INFO: namespace e2e-tests-gc-6zwnv deletion completed in 6.138453031s

• [SLOW TEST:16.265 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
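"Not orphaning" in this test means the RC is deleted in a way that lets the garbage collector remove its dependent pods rather than leaving them behind. One way to request that is a delete with a background propagation policy; a minimal sketch of just the options object, since the delete call's signature varies between client-go versions:

// deleteOptionsNotOrphaning asks the garbage collector to remove dependents
// (the RC's pods) after the owner is deleted, instead of orphaning them.
func deleteOptionsNotOrphaning() *metav1.DeleteOptions {
    policy := metav1.DeletePropagationBackground
    return &metav1.DeleteOptions{PropagationPolicy: &policy}
}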
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:46:42.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-v9lr5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-v9lr5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v9lr5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 28.102.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.102.28_udp@PTR;check="$$(dig +tcp +noall +answer +search 28.102.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.102.28_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-v9lr5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-v9lr5.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-v9lr5.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v9lr5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 28.102.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.102.28_udp@PTR;check="$$(dig +tcp +noall +answer +search 28.102.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.102.28_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep  6 21:47:00.735: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.758: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.783: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.786: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.790: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.792: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.796: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.799: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.828: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.831: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:00.845: INFO: Lookups using e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc]

Sep  6 21:47:05.850: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.872: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.898: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.901: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.903: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.906: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.908: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.910: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.913: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.915: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:05.933: INFO: Lookups using e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc]

Sep  6 21:47:10.850: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.916: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.919: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.922: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.925: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.929: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.932: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.935: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:10.957: INFO: Lookups using e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc]

Sep  6 21:47:15.850: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.867: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.888: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.891: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.893: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.896: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.899: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.901: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.904: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.906: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc from pod e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008: the server could not find the requested resource (get pods dns-test-73c692a6-f08a-11ea-b72c-0242ac110008)
Sep  6 21:47:15.923: INFO: Lookups using e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-v9lr5 jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5 jessie_udp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@dns-test-service.e2e-tests-dns-v9lr5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-v9lr5.svc]

Sep  6 21:47:20.946: INFO: DNS probes using e2e-tests-dns-v9lr5/dns-test-73c692a6-f08a-11ea-b72c-0242ac110008 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:47:21.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-v9lr5" for this suite.
Sep  6 21:47:27.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:47:27.377: INFO: namespace: e2e-tests-dns-v9lr5, resource: bindings, ignored listing per whitelist
Sep  6 21:47:27.413: INFO: namespace e2e-tests-dns-v9lr5 deletion completed in 6.329111212s

• [SLOW TEST:44.981 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
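The probe pod behind this test resolves the service under several name forms (short name, name.namespace, name.namespace.svc) over both UDP and TCP, and the framework polls the probe's results every few seconds until every lookup succeeds, which is why the same "Unable to read ..." lines repeat above before the final "DNS probes ... succeeded". A minimal Go sketch of the same kind of in-cluster lookups, using only the standard library, follows; the hostnames reuse the service and namespace from this run, but the program itself is illustrative and is not the test's own code.

package main

import (
    "fmt"
    "net"
)

func main() {
    // Name forms the conformance probe exercises for the test service.
    names := []string{
        "dns-test-service",                         // short name, completed via the pod's DNS search path
        "dns-test-service.e2e-tests-dns-v9lr5",     // service.namespace
        "dns-test-service.e2e-tests-dns-v9lr5.svc", // service.namespace.svc
    }
    for _, name := range names {
        addrs, err := net.LookupHost(name)
        if err != nil {
            fmt.Printf("lookup %s failed: %v\n", name, err)
            continue
        }
        fmt.Printf("lookup %s -> %v\n", name, addrs)
    }
}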
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:47:27.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep  6 21:47:27.559: INFO: Waiting up to 5m0s for pod "downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-smp2t" to be "success or failure"
Sep  6 21:47:27.568: INFO: Pod "downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.46857ms
Sep  6 21:47:29.572: INFO: Pod "downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013640594s
Sep  6 21:47:31.577: INFO: Pod "downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017952025s
STEP: Saw pod success
Sep  6 21:47:31.577: INFO: Pod "downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:47:31.579: INFO: Trying to get logs from node hunter-worker2 pod downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008 container dapi-container: 
STEP: delete the pod
Sep  6 21:47:31.594: INFO: Waiting for pod downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:47:31.604: INFO: Pod downward-api-8e849e0d-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:47:31.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-smp2t" for this suite.
Sep  6 21:47:37.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:47:37.630: INFO: namespace: e2e-tests-downward-api-smp2t, resource: bindings, ignored listing per whitelist
Sep  6 21:47:37.688: INFO: namespace e2e-tests-downward-api-smp2t deletion completed in 6.080894817s

• [SLOW TEST:10.275 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
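The pod in this test sets no explicit limits, so the downward API falls back to the node's allocatable CPU and memory when exposing limits.cpu and limits.memory as environment variables. A hedged sketch of a container wired up that way, built from the k8s.io/api/core/v1 types, is below; the container name, image, command, and env var names are illustrative assumptions rather than values read from the test.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Container that reads its own (possibly defaulted) limits via the downward API.
    container := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
            {
                Name: "CPU_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                },
            },
            {
                Name: "MEMORY_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                },
            },
        },
    }
    fmt.Printf("%+v\n", container)
}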
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:47:37.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-94a29c23-f08a-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:47:37.820: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-hs8p2" to be "success or failure"
Sep  6 21:47:37.824: INFO: Pod "pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930324ms
Sep  6 21:47:39.828: INFO: Pod "pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007529296s
Sep  6 21:47:41.831: INFO: Pod "pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010932436s
STEP: Saw pod success
Sep  6 21:47:41.831: INFO: Pod "pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:47:41.833: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Sep  6 21:47:42.003: INFO: Waiting for pod pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:47:42.010: INFO: Pod pod-projected-secrets-94a30d08-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:47:42.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hs8p2" for this suite.
Sep  6 21:47:48.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:47:48.040: INFO: namespace: e2e-tests-projected-hs8p2, resource: bindings, ignored listing per whitelist
Sep  6 21:47:48.099: INFO: namespace e2e-tests-projected-hs8p2 deletion completed in 6.086319305s

• [SLOW TEST:10.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
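This test mounts a secret through a projected volume, remapping a key to a new path and setting an explicit per-item file mode. A sketch of that volume definition using the k8s.io/api/core/v1 types follows; the secret name, key, path, and 0400 mode are illustrative, not taken from the test code.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // per-item file mode applied to the remapped key

    volume := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                            Items: []corev1.KeyToPath{
                                {Key: "data-1", Path: "new-path-data-1", Mode: &mode},
                            },
                        },
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", volume)
}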
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:47:48.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008
Sep  6 21:47:48.223: INFO: Pod name my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008: Found 0 pods out of 1
Sep  6 21:47:53.227: INFO: Pod name my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008: Found 1 pods out of 1
Sep  6 21:47:53.227: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008" are running
Sep  6 21:47:53.230: INFO: Pod "my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008-g67fn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:47:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:47:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:47:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-06 21:47:48 +0000 UTC Reason: Message:}])
Sep  6 21:47:53.230: INFO: Trying to dial the pod
Sep  6 21:47:58.242: INFO: Controller my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008: Got expected result from replica 1 [my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008-g67fn]: "my-hostname-basic-9ad2258a-f08a-11ea-b72c-0242ac110008-g67fn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:47:58.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-b2skm" for this suite.
Sep  6 21:48:04.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:48:04.271: INFO: namespace: e2e-tests-replication-controller-b2skm, resource: bindings, ignored listing per whitelist
Sep  6 21:48:04.335: INFO: namespace e2e-tests-replication-controller-b2skm deletion completed in 6.089938466s

• [SLOW TEST:16.236 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
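The replication controller created here runs a single replica of an image that serves its own hostname; the test then dials the replica and compares the response to the pod name, as the "Got expected result from replica 1" line shows. A hedged sketch of an equivalent ReplicationController object follows; the image, label key, and port are assumptions, and only the one-replica shape is taken from the log.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    name := "my-hostname-basic"
    replicas := int32(1)

    rc := corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: map[string]string{"name": name},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image; any hostname-echo server works
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        },
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", rc)
}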
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:48:04.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:48:04.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-nj7zk" to be "success or failure"
Sep  6 21:48:04.478: INFO: Pod "downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664257ms
Sep  6 21:48:06.482: INFO: Pod "downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011106093s
Sep  6 21:48:08.486: INFO: Pod "downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015262879s
STEP: Saw pod success
Sep  6 21:48:08.486: INFO: Pod "downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:48:08.490: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:48:08.509: INFO: Waiting for pod downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:48:08.622: INFO: Pod downwardapi-volume-a48156d5-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:48:08.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nj7zk" for this suite.
Sep  6 21:48:14.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:48:14.759: INFO: namespace: e2e-tests-projected-nj7zk, resource: bindings, ignored listing per whitelist
Sep  6 21:48:14.796: INFO: namespace e2e-tests-projected-nj7zk deletion completed in 6.169568606s

• [SLOW TEST:10.460 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
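Here the pod name is surfaced as a file through a projected downward API volume, and the container simply reads the file back. A sketch of that volume, with an illustrative volume name and file path, is below.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    volume := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{
                                {
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                },
                            },
                        },
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", volume)
}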
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:48:14.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:48:14.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:48:19.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pb2cb" for this suite.
Sep  6 21:48:57.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:48:57.129: INFO: namespace: e2e-tests-pods-pb2cb, resource: bindings, ignored listing per whitelist
Sep  6 21:48:57.194: INFO: namespace e2e-tests-pods-pb2cb deletion completed in 38.108267074s

• [SLOW TEST:42.398 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
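The websocket exec test submits a pod and then runs a command in it through the API server's pod exec subresource over a websocket connection rather than SPDY. The sketch below only constructs the kind of URL such a client would dial; the host, namespace, pod name, and command are placeholders, and authentication (bearer token or client certificates from the kubeconfig) is deliberately omitted.

package main

import (
    "fmt"
    "net/url"
)

func main() {
    query := url.Values{}
    query.Add("command", "cat")
    query.Add("command", "/etc/resolv.conf")
    query.Set("stdout", "true")
    query.Set("stderr", "true")

    u := url.URL{
        Scheme:   "wss",
        Host:     "kube-apiserver:6443", // placeholder API server address
        Path:     "/api/v1/namespaces/e2e-tests-pods-pb2cb/pods/pod-exec-websocket/exec",
        RawQuery: query.Encode(),
    }
    // A websocket client would dial this URL with credentials from the kubeconfig attached.
    fmt.Println(u.String())
}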
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:48:57.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-c40488e4-f08a-11ea-b72c-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-c4048935-f08a-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c40488e4-f08a-11ea-b72c-0242ac110008
STEP: Updating configmap cm-test-opt-upd-c4048935-f08a-11ea-b72c-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-c404894d-f08a-11ea-b72c-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:49:05.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vvzg6" for this suite.
Sep  6 21:49:27.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:49:27.453: INFO: namespace: e2e-tests-configmap-vvzg6, resource: bindings, ignored listing per whitelist
Sep  6 21:49:27.517: INFO: namespace e2e-tests-configmap-vvzg6 deletion completed in 22.093131208s

• [SLOW TEST:30.323 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
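This test mounts optional config maps, deletes one, updates another, creates a third, and then waits for the kubelet to reflect all three changes in the mounted volumes. The key detail is the Optional flag on the config map volume source, sketched below with illustrative names.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true // the pod starts and keeps running even if the config map is absent

    volume := corev1.Volume{
        Name: "cm-volume-del",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                Optional:             &optional,
            },
        },
    }
    fmt.Printf("%+v\n", volume)
}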
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:49:27.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep  6 21:49:27.619: INFO: Waiting up to 5m0s for pod "pod-d615cda5-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-emptydir-sbwzl" to be "success or failure"
Sep  6 21:49:27.631: INFO: Pod "pod-d615cda5-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.913058ms
Sep  6 21:49:29.635: INFO: Pod "pod-d615cda5-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016017203s
Sep  6 21:49:31.639: INFO: Pod "pod-d615cda5-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020173504s
STEP: Saw pod success
Sep  6 21:49:31.639: INFO: Pod "pod-d615cda5-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:49:31.642: INFO: Trying to get logs from node hunter-worker2 pod pod-d615cda5-f08a-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 21:49:31.866: INFO: Waiting for pod pod-d615cda5-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:49:31.936: INFO: Pod pod-d615cda5-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:49:31.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sbwzl" for this suite.
Sep  6 21:49:37.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:49:38.004: INFO: namespace: e2e-tests-emptydir-sbwzl, resource: bindings, ignored listing per whitelist
Sep  6 21:49:38.053: INFO: namespace e2e-tests-emptydir-sbwzl deletion completed in 6.112192488s

• [SLOW TEST:10.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
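The emptyDir case runs the container as a non-root user against a tmpfs-backed (memory medium) emptyDir and checks a file created with 0666 permissions. A sketch of the relevant pod spec pieces follows; the UID, image, command, and mount path are assumptions, as is placing RunAsUser on the container-level security context.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(1001) // non-root UID; the exact value is an assumption

    pod := corev1.PodSpec{
        Containers: []corev1.Container{
            {
                Name:            "test-container",
                Image:           "busybox",
                Command:         []string{"sh", "-c", "umask 0; touch /test-volume/file && ls -l /test-volume"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            },
        },
        Volumes: []corev1.Volume{
            {
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs backing
                },
            },
        },
        RestartPolicy: corev1.RestartPolicyNever,
    }
    fmt.Printf("%+v\n", pod)
}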
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:49:38.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Sep  6 21:49:38.160: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:49:43.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4pr9r" for this suite.
Sep  6 21:49:49.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:49:49.658: INFO: namespace: e2e-tests-init-container-4pr9r, resource: bindings, ignored listing per whitelist
Sep  6 21:49:49.667: INFO: namespace e2e-tests-init-container-4pr9r deletion completed in 6.111778314s

• [SLOW TEST:11.614 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
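With RestartPolicy Never, a failing init container is terminal: the app containers never start and the pod ends up Failed, which is what this test asserts. A minimal sketch of such a pod spec is below; the names and images are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    pod := corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        InitContainers: []corev1.Container{
            {Name: "init1", Image: "busybox", Command: []string{"/bin/false"}}, // always fails
        },
        Containers: []corev1.Container{
            {Name: "run1", Image: "busybox", Command: []string{"/bin/true"}}, // never started
        },
    }
    fmt.Printf("%+v\n", pod)
}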
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:49:49.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep  6 21:49:49.811: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:49.813: INFO: Number of nodes with available pods: 0
Sep  6 21:49:49.814: INFO: Node hunter-worker is running more than one daemon pod
Sep  6 21:49:50.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:50.821: INFO: Number of nodes with available pods: 0
Sep  6 21:49:50.821: INFO: Node hunter-worker is running more than one daemon pod
Sep  6 21:49:51.841: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:51.844: INFO: Number of nodes with available pods: 0
Sep  6 21:49:51.844: INFO: Node hunter-worker is running more than one daemon pod
Sep  6 21:49:52.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:52.820: INFO: Number of nodes with available pods: 0
Sep  6 21:49:52.820: INFO: Node hunter-worker is running more than one daemon pod
Sep  6 21:49:53.839: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:53.842: INFO: Number of nodes with available pods: 2
Sep  6 21:49:53.842: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep  6 21:49:53.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:53.868: INFO: Number of nodes with available pods: 1
Sep  6 21:49:53.868: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:54.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:54.875: INFO: Number of nodes with available pods: 1
Sep  6 21:49:54.875: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:55.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:55.876: INFO: Number of nodes with available pods: 1
Sep  6 21:49:55.876: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:56.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:56.877: INFO: Number of nodes with available pods: 1
Sep  6 21:49:56.877: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:57.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:57.881: INFO: Number of nodes with available pods: 1
Sep  6 21:49:57.881: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:58.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:58.877: INFO: Number of nodes with available pods: 1
Sep  6 21:49:58.877: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:49:59.872: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:49:59.876: INFO: Number of nodes with available pods: 1
Sep  6 21:49:59.876: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:50:00.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:50:00.877: INFO: Number of nodes with available pods: 1
Sep  6 21:50:00.877: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:50:02.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:50:02.046: INFO: Number of nodes with available pods: 1
Sep  6 21:50:02.046: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:50:02.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:50:02.876: INFO: Number of nodes with available pods: 1
Sep  6 21:50:02.876: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  6 21:50:03.873: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  6 21:50:03.876: INFO: Number of nodes with available pods: 2
Sep  6 21:50:03.876: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b4ts2, will wait for the garbage collector to delete the pods
Sep  6 21:50:03.937: INFO: Deleting DaemonSet.extensions daemon-set took: 6.401112ms
Sep  6 21:50:04.037: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.280228ms
Sep  6 21:50:10.141: INFO: Number of nodes with available pods: 0
Sep  6 21:50:10.141: INFO: Number of running nodes: 0, number of available pods: 0
Sep  6 21:50:10.143: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b4ts2/daemonsets","resourceVersion":"228230"},"items":null}

Sep  6 21:50:10.146: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b4ts2/pods","resourceVersion":"228230"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:50:10.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-b4ts2" for this suite.
Sep  6 21:50:16.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:50:16.208: INFO: namespace: e2e-tests-daemonsets-b4ts2, resource: bindings, ignored listing per whitelist
Sep  6 21:50:16.278: INFO: namespace e2e-tests-daemonsets-b4ts2 deletion completed in 6.10418483s

• [SLOW TEST:26.611 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
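The simple daemon test creates a DaemonSet, waits for one available pod per schedulable node (the tainted hunter-control-plane node is skipped, as the log notes), deletes one of the pods, and waits for the controller to revive it. A hedged sketch of an equivalent DaemonSet object follows; the label key and image are assumptions, only the DaemonSet name matches the log.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}

    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {Name: "app", Image: "docker.io/library/nginx:1.14-alpine"}, // image is illustrative
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", ds)
}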
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:50:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f325d171-f08a-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:50:16.411: INFO: Waiting up to 5m0s for pod "pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-9npxk" to be "success or failure"
Sep  6 21:50:16.414: INFO: Pod "pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55215ms
Sep  6 21:50:18.432: INFO: Pod "pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021615736s
Sep  6 21:50:20.436: INFO: Pod "pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025837945s
STEP: Saw pod success
Sep  6 21:50:20.436: INFO: Pod "pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:50:20.439: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Sep  6 21:50:20.476: INFO: Waiting for pod pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008 to disappear
Sep  6 21:50:20.511: INFO: Pod pod-configmaps-f32942c3-f08a-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:50:20.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9npxk" for this suite.
Sep  6 21:50:26.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:50:26.582: INFO: namespace: e2e-tests-configmap-9npxk, resource: bindings, ignored listing per whitelist
Sep  6 21:50:26.606: INFO: namespace e2e-tests-configmap-9npxk deletion completed in 6.092187164s

• [SLOW TEST:10.328 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:50:26.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Sep  6 21:50:26.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bzlgs run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Sep  6 21:50:31.925: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0906 21:50:31.848990    2659 log.go:172] (0xc00014c6e0) (0xc0005dbe00) Create stream\nI0906 21:50:31.849015    2659 log.go:172] (0xc00014c6e0) (0xc0005dbe00) Stream added, broadcasting: 1\nI0906 21:50:31.851418    2659 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0906 21:50:31.851473    2659 log.go:172] (0xc00014c6e0) (0xc0005dbea0) Create stream\nI0906 21:50:31.851495    2659 log.go:172] (0xc00014c6e0) (0xc0005dbea0) Stream added, broadcasting: 3\nI0906 21:50:31.853058    2659 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0906 21:50:31.853123    2659 log.go:172] (0xc00014c6e0) (0xc000a80140) Create stream\nI0906 21:50:31.853145    2659 log.go:172] (0xc00014c6e0) (0xc000a80140) Stream added, broadcasting: 5\nI0906 21:50:31.854229    2659 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0906 21:50:31.854291    2659 log.go:172] (0xc00014c6e0) (0xc00076e000) Create stream\nI0906 21:50:31.854320    2659 log.go:172] (0xc00014c6e0) (0xc00076e000) Stream added, broadcasting: 7\nI0906 21:50:31.855282    2659 log.go:172] (0xc00014c6e0) Reply frame received for 7\nI0906 21:50:31.855502    2659 log.go:172] (0xc0005dbea0) (3) Writing data frame\nI0906 21:50:31.855604    2659 log.go:172] (0xc0005dbea0) (3) Writing data frame\nI0906 21:50:31.856606    2659 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0906 21:50:31.856638    2659 log.go:172] (0xc000a80140) (5) Data frame handling\nI0906 21:50:31.856664    2659 log.go:172] (0xc000a80140) (5) Data frame sent\nI0906 21:50:31.857445    2659 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0906 21:50:31.857465    2659 log.go:172] (0xc000a80140) (5) Data frame handling\nI0906 21:50:31.857482    2659 log.go:172] (0xc000a80140) (5) Data frame sent\nI0906 21:50:31.906065    2659 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0906 21:50:31.906116    2659 log.go:172] (0xc000a80140) (5) Data frame handling\nI0906 21:50:31.906146    2659 log.go:172] (0xc00014c6e0) Data frame received for 7\nI0906 21:50:31.906165    2659 log.go:172] (0xc00076e000) (7) Data frame handling\nI0906 21:50:31.906322    2659 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0906 21:50:31.906351    2659 log.go:172] (0xc0005dbe00) (1) Data frame handling\nI0906 21:50:31.906366    2659 log.go:172] (0xc0005dbe00) (1) Data frame sent\nI0906 21:50:31.906384    2659 log.go:172] (0xc00014c6e0) (0xc0005dbe00) Stream removed, broadcasting: 1\nI0906 21:50:31.906506    2659 log.go:172] (0xc00014c6e0) (0xc0005dbe00) Stream removed, broadcasting: 1\nI0906 21:50:31.906552    2659 log.go:172] (0xc00014c6e0) (0xc0005dbea0) Stream removed, broadcasting: 3\nI0906 21:50:31.906573    2659 log.go:172] (0xc00014c6e0) (0xc000a80140) Stream removed, broadcasting: 5\nI0906 21:50:31.906650    2659 log.go:172] (0xc00014c6e0) Go away received\nI0906 21:50:31.906774    2659 log.go:172] (0xc00014c6e0) (0xc00076e000) Stream removed, broadcasting: 7\n"
Sep  6 21:50:31.926: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:50:33.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bzlgs" for this suite.
Sep  6 21:50:39.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:50:40.017: INFO: namespace: e2e-tests-kubectl-bzlgs, resource: bindings, ignored listing per whitelist
Sep  6 21:50:40.023: INFO: namespace e2e-tests-kubectl-bzlgs deletion completed in 6.087709496s

• [SLOW TEST:13.416 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:50:40.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:50:40.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-d8sld" to be "success or failure"
Sep  6 21:50:40.177: INFO: Pod "downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.406752ms
Sep  6 21:50:42.182: INFO: Pod "downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024805266s
Sep  6 21:50:44.185: INFO: Pod "downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028334744s
STEP: Saw pod success
Sep  6 21:50:44.185: INFO: Pod "downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:50:44.187: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:50:44.200: INFO: Waiting for pod downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:50:44.205: INFO: Pod downwardapi-volume-01524dc8-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:50:44.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d8sld" for this suite.
Sep  6 21:50:50.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:50:50.343: INFO: namespace: e2e-tests-downward-api-d8sld, resource: bindings, ignored listing per whitelist
Sep  6 21:50:50.363: INFO: namespace e2e-tests-downward-api-d8sld deletion completed in 6.156235375s

• [SLOW TEST:10.340 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
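This variant checks that DefaultMode on a downward API volume controls the permission bits of the projected files. A sketch with an assumed 0400 default mode and a single podname item follows.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    defaultMode := int32(0400) // applied to every file in the volume unless an item overrides it

    volume := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                DefaultMode: &defaultMode,
                Items: []corev1.DownwardAPIVolumeFile{
                    {
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", volume)
}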
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:50:50.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-077bd8b4-f08b-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 21:50:50.524: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-wzr2k" to be "success or failure"
Sep  6 21:50:50.553: INFO: Pod "pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.264636ms
Sep  6 21:50:52.557: INFO: Pod "pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032705531s
Sep  6 21:50:54.562: INFO: Pod "pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037853672s
STEP: Saw pod success
Sep  6 21:50:54.562: INFO: Pod "pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:50:54.567: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Sep  6 21:50:54.602: INFO: Waiting for pod pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:50:54.666: INFO: Pod pod-projected-secrets-077f1fff-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:50:54.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wzr2k" for this suite.
Sep  6 21:51:00.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:51:00.720: INFO: namespace: e2e-tests-projected-wzr2k, resource: bindings, ignored listing per whitelist
Sep  6 21:51:00.759: INFO: namespace e2e-tests-projected-wzr2k deletion completed in 6.088760085s

• [SLOW TEST:10.396 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:51:00.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Sep  6 21:51:00.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Sep  6 21:51:01.036: INFO: stderr: ""
Sep  6 21:51:01.036: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:51:01.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ksmk8" for this suite.
Sep  6 21:51:07.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:51:07.271: INFO: namespace: e2e-tests-kubectl-ksmk8, resource: bindings, ignored listing per whitelist
Sep  6 21:51:07.312: INFO: namespace e2e-tests-kubectl-ksmk8 deletion completed in 6.271679264s

• [SLOW TEST:6.553 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
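The api-versions check above shells out to kubectl; the same assertion can be made directly against the discovery API. A minimal client-go sketch, assuming only the kubeconfig path shown in the log; nothing here is taken from the suite's own source.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the run uses and ask the server which
	// group/versions it serves; the core group reports "v1".
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the core group has an empty name
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
}

------------------------------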
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:51:07.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  6 21:51:07.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-dzntq'
Sep  6 21:51:07.515: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  6 21:51:07.515: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Sep  6 21:51:11.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-dzntq'
Sep  6 21:51:11.653: INFO: stderr: ""
Sep  6 21:51:11.653: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:51:11.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dzntq" for this suite.
Sep  6 21:51:17.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:51:17.720: INFO: namespace: e2e-tests-kubectl-dzntq, resource: bindings, ignored listing per whitelist
Sep  6 21:51:17.779: INFO: namespace e2e-tests-kubectl-dzntq deletion completed in 6.123754661s

• [SLOW TEST:10.466 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
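The stderr above notes that kubectl run --generator=deployment/v1beta1 is deprecated; the equivalent object can be created through the apps/v1 API instead. A hedged client-go sketch: names, namespace and replica count are illustrative, and the Create signature with a context is the client-go >= v0.18 form, newer than the v1.13 client this suite uses.

package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}

------------------------------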
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:51:17.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-p9xtc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  6 21:51:17.921: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  6 21:51:42.111: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.149 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-p9xtc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:51:42.111: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:51:42.165865       7 log.go:172] (0xc001adc2c0) (0xc002648be0) Create stream
I0906 21:51:42.165900       7 log.go:172] (0xc001adc2c0) (0xc002648be0) Stream added, broadcasting: 1
I0906 21:51:42.169506       7 log.go:172] (0xc001adc2c0) Reply frame received for 1
I0906 21:51:42.169562       7 log.go:172] (0xc001adc2c0) (0xc000c455e0) Create stream
I0906 21:51:42.169581       7 log.go:172] (0xc001adc2c0) (0xc000c455e0) Stream added, broadcasting: 3
I0906 21:51:42.170449       7 log.go:172] (0xc001adc2c0) Reply frame received for 3
I0906 21:51:42.170493       7 log.go:172] (0xc001adc2c0) (0xc001a68640) Create stream
I0906 21:51:42.170512       7 log.go:172] (0xc001adc2c0) (0xc001a68640) Stream added, broadcasting: 5
I0906 21:51:42.171435       7 log.go:172] (0xc001adc2c0) Reply frame received for 5
I0906 21:51:43.236702       7 log.go:172] (0xc001adc2c0) Data frame received for 3
I0906 21:51:43.236746       7 log.go:172] (0xc000c455e0) (3) Data frame handling
I0906 21:51:43.236782       7 log.go:172] (0xc000c455e0) (3) Data frame sent
I0906 21:51:43.236852       7 log.go:172] (0xc001adc2c0) Data frame received for 3
I0906 21:51:43.236886       7 log.go:172] (0xc000c455e0) (3) Data frame handling
I0906 21:51:43.237087       7 log.go:172] (0xc001adc2c0) Data frame received for 5
I0906 21:51:43.237122       7 log.go:172] (0xc001a68640) (5) Data frame handling
I0906 21:51:43.239112       7 log.go:172] (0xc001adc2c0) Data frame received for 1
I0906 21:51:43.239156       7 log.go:172] (0xc002648be0) (1) Data frame handling
I0906 21:51:43.239199       7 log.go:172] (0xc002648be0) (1) Data frame sent
I0906 21:51:43.239224       7 log.go:172] (0xc001adc2c0) (0xc002648be0) Stream removed, broadcasting: 1
I0906 21:51:43.239242       7 log.go:172] (0xc001adc2c0) Go away received
I0906 21:51:43.239396       7 log.go:172] (0xc001adc2c0) (0xc002648be0) Stream removed, broadcasting: 1
I0906 21:51:43.239429       7 log.go:172] (0xc001adc2c0) (0xc000c455e0) Stream removed, broadcasting: 3
I0906 21:51:43.239451       7 log.go:172] (0xc001adc2c0) (0xc001a68640) Stream removed, broadcasting: 5
Sep  6 21:51:43.239: INFO: Found all expected endpoints: [netserver-0]
Sep  6 21:51:43.243: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.156 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-p9xtc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:51:43.243: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:51:43.280443       7 log.go:172] (0xc00187e2c0) (0xc001a68820) Create stream
I0906 21:51:43.280475       7 log.go:172] (0xc00187e2c0) (0xc001a68820) Stream added, broadcasting: 1
I0906 21:51:43.284339       7 log.go:172] (0xc00187e2c0) Reply frame received for 1
I0906 21:51:43.284397       7 log.go:172] (0xc00187e2c0) (0xc0021ffd60) Create stream
I0906 21:51:43.284423       7 log.go:172] (0xc00187e2c0) (0xc0021ffd60) Stream added, broadcasting: 3
I0906 21:51:43.285404       7 log.go:172] (0xc00187e2c0) Reply frame received for 3
I0906 21:51:43.285461       7 log.go:172] (0xc00187e2c0) (0xc002648dc0) Create stream
I0906 21:51:43.285475       7 log.go:172] (0xc00187e2c0) (0xc002648dc0) Stream added, broadcasting: 5
I0906 21:51:43.286325       7 log.go:172] (0xc00187e2c0) Reply frame received for 5
I0906 21:51:44.347898       7 log.go:172] (0xc00187e2c0) Data frame received for 3
I0906 21:51:44.347948       7 log.go:172] (0xc0021ffd60) (3) Data frame handling
I0906 21:51:44.347973       7 log.go:172] (0xc0021ffd60) (3) Data frame sent
I0906 21:51:44.348146       7 log.go:172] (0xc00187e2c0) Data frame received for 3
I0906 21:51:44.348181       7 log.go:172] (0xc0021ffd60) (3) Data frame handling
I0906 21:51:44.348298       7 log.go:172] (0xc00187e2c0) Data frame received for 5
I0906 21:51:44.348320       7 log.go:172] (0xc002648dc0) (5) Data frame handling
I0906 21:51:44.350293       7 log.go:172] (0xc00187e2c0) Data frame received for 1
I0906 21:51:44.350306       7 log.go:172] (0xc001a68820) (1) Data frame handling
I0906 21:51:44.350312       7 log.go:172] (0xc001a68820) (1) Data frame sent
I0906 21:51:44.350319       7 log.go:172] (0xc00187e2c0) (0xc001a68820) Stream removed, broadcasting: 1
I0906 21:51:44.350411       7 log.go:172] (0xc00187e2c0) (0xc001a68820) Stream removed, broadcasting: 1
I0906 21:51:44.350431       7 log.go:172] (0xc00187e2c0) (0xc0021ffd60) Stream removed, broadcasting: 3
I0906 21:51:44.350438       7 log.go:172] (0xc00187e2c0) (0xc002648dc0) Stream removed, broadcasting: 5
Sep  6 21:51:44.350: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0906 21:51:44.350514       7 log.go:172] (0xc00187e2c0) Go away received
Sep  6 21:51:44.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-p9xtc" for this suite.
Sep  6 21:52:08.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:52:08.448: INFO: namespace: e2e-tests-pod-network-test-p9xtc, resource: bindings, ignored listing per whitelist
Sep  6 21:52:08.472: INFO: namespace e2e-tests-pod-network-test-p9xtc deletion completed in 24.118391761s

• [SLOW TEST:50.692 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
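The node-pod UDP check above execs "echo 'hostName' | nc -w 1 -u <pod-ip> 8081" inside the host-test container and expects the netserver pod to answer with its hostname. The same probe in plain Go, using only the standard library; the address is illustrative, not a pod IP from this run.

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// probeUDP sends "hostName" to a netserver pod's UDP port and returns
// whatever the pod answers (its hostname), with a one-second deadline
// standing in for nc's -w 1.
func probeUDP(addr string) (string, error) {
	conn, err := net.DialTimeout("udp", addr, time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	conn.SetDeadline(time.Now().Add(time.Second))
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	host, err := probeUDP("10.244.1.149:8081")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("endpoint answered:", host)
}

------------------------------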
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:52:08.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 21:52:08.691: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"360e95ce-f08b-11ea-b060-0242ac120006", Controller:(*bool)(0xc0017a6232), BlockOwnerDeletion:(*bool)(0xc0017a6233)}}
Sep  6 21:52:08.708: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"36098710-f08b-11ea-b060-0242ac120006", Controller:(*bool)(0xc001efb4b2), BlockOwnerDeletion:(*bool)(0xc001efb4b3)}}
Sep  6 21:52:08.777: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"360a0f93-f08b-11ea-b060-0242ac120006", Controller:(*bool)(0xc0017a6472), BlockOwnerDeletion:(*bool)(0xc0017a6473)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:52:13.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fbmzt" for this suite.
Sep  6 21:52:19.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:52:19.903: INFO: namespace: e2e-tests-gc-fbmzt, resource: bindings, ignored listing per whitelist
Sep  6 21:52:19.940: INFO: namespace e2e-tests-gc-fbmzt deletion completed in 6.12956573s

• [SLOW TEST:11.468 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
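The OwnerReferences printed above form the dependency circle under test: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A small sketch of how such a reference is built with the metav1 types; the field values mirror the log, the helper name is an assumption.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedByPod returns an OwnerReference pointing at owner, in the same shape
// as the v1.OwnerReference values dumped in the log (APIVersion "v1",
// Kind "Pod", Controller and BlockOwnerDeletion set to true).
func ownedByPod(owner *corev1.Pod) metav1.OwnerReference {
	isController := true
	blockDeletion := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &isController,
		BlockOwnerDeletion: &blockDeletion,
	}
}

------------------------------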
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:52:19.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0906 21:53:00.591611       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  6 21:53:00.591: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:53:00.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wd9fs" for this suite.
Sep  6 21:53:08.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:53:08.624: INFO: namespace: e2e-tests-gc-wd9fs, resource: bindings, ignored listing per whitelist
Sep  6 21:53:08.675: INFO: namespace e2e-tests-gc-wd9fs deletion completed in 8.080856868s

• [SLOW TEST:48.734 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
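The orphaning behaviour exercised above ("delete options say so") corresponds to deleting the replication controller with propagationPolicy=Orphan, which tells the garbage collector to leave the dependent pods alone. A minimal client-go sketch; namespace and controller name are illustrative, and the Delete signature with a context is the client-go >= v0.18 form.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Orphan the dependents instead of cascading the delete to them.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "example-rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		log.Fatal(err)
	}
}

------------------------------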
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:53:08.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:53:12.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5lvnn" for this suite.
Sep  6 21:53:50.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:53:51.055: INFO: namespace: e2e-tests-kubelet-test-5lvnn, resource: bindings, ignored listing per whitelist
Sep  6 21:53:51.058: INFO: namespace e2e-tests-kubelet-test-5lvnn deletion completed in 38.095380798s

• [SLOW TEST:42.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:53:51.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-73334cdc-f08b-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:53:51.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-8m5wd" to be "success or failure"
Sep  6 21:53:51.245: INFO: Pod "pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.546526ms
Sep  6 21:53:53.292: INFO: Pod "pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063174797s
Sep  6 21:53:55.296: INFO: Pod "pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067401339s
STEP: Saw pod success
Sep  6 21:53:55.296: INFO: Pod "pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:53:55.299: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 21:53:55.401: INFO: Waiting for pod pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:53:55.435: INFO: Pod pod-projected-configmaps-73344b9c-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:53:55.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8m5wd" for this suite.
Sep  6 21:54:01.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:01.596: INFO: namespace: e2e-tests-projected-8m5wd, resource: bindings, ignored listing per whitelist
Sep  6 21:54:01.603: INFO: namespace e2e-tests-projected-8m5wd deletion completed in 6.164988052s

• [SLOW TEST:10.545 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:01.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:54:01.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-b9qm4" to be "success or failure"
Sep  6 21:54:01.887: INFO: Pod "downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.260963ms
Sep  6 21:54:03.890: INFO: Pod "downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008925241s
Sep  6 21:54:05.894: INFO: Pod "downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012795774s
STEP: Saw pod success
Sep  6 21:54:05.894: INFO: Pod "downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:54:05.914: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:54:05.982: INFO: Waiting for pod downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:54:05.992: INFO: Pod downwardapi-volume-797ff1cc-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:54:05.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9qm4" for this suite.
Sep  6 21:54:12.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:12.074: INFO: namespace: e2e-tests-projected-b9qm4, resource: bindings, ignored listing per whitelist
Sep  6 21:54:12.106: INFO: namespace e2e-tests-projected-b9qm4 deletion completed in 6.110254106s

• [SLOW TEST:10.502 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
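The projected downward API test above exposes the container's memory request as a file in a projected volume. A sketch of the volume source that produces this, using the corev1 types; the path, divisor and helper name are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryRequestProjection writes the value of requests.memory for the named
// container into the file "memory_request" inside the projected volume.
func memoryRequestProjection(containerName string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: containerName,
							Resource:      "requests.memory",
							Divisor:       resource.MustParse("1Mi"),
						},
					}},
				},
			}},
		},
	}
}

------------------------------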
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:12.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7fb66c07-f08b-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:54:12.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-7xv7n" to be "success or failure"
Sep  6 21:54:12.280: INFO: Pod "pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 50.058484ms
Sep  6 21:54:14.284: INFO: Pod "pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054070084s
Sep  6 21:54:16.289: INFO: Pod "pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058395243s
STEP: Saw pod success
Sep  6 21:54:16.289: INFO: Pod "pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:54:16.292: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Sep  6 21:54:16.312: INFO: Waiting for pod pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:54:16.329: INFO: Pod pod-configmaps-7fb89e92-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:54:16.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7xv7n" for this suite.
Sep  6 21:54:22.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:22.366: INFO: namespace: e2e-tests-configmap-7xv7n, resource: bindings, ignored listing per whitelist
Sep  6 21:54:22.420: INFO: namespace e2e-tests-configmap-7xv7n deletion completed in 6.086904267s

• [SLOW TEST:10.314 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
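The ConfigMap test above mounts a single key under a remapped path with its own file mode ("mappings and Item mode set"). A sketch of that volume source; the key, path and mode are illustrative, not the generated values from this run.

package sketch

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapVolume remaps one key of the config map to a nested path
// and gives that file an explicit mode, independent of defaultMode.
func mappedConfigMapVolume(configMapName string) corev1.VolumeSource {
	itemMode := int32(0400)
	return corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "path/to/data-2",
				Mode: &itemMode,
			}},
		},
	}
}

------------------------------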
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:22.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  6 21:54:22.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-f5hq7'
Sep  6 21:54:22.605: INFO: stderr: ""
Sep  6 21:54:22.605: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Sep  6 21:54:22.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f5hq7'
Sep  6 21:54:30.071: INFO: stderr: ""
Sep  6 21:54:30.071: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:54:30.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f5hq7" for this suite.
Sep  6 21:54:36.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:36.206: INFO: namespace: e2e-tests-kubectl-f5hq7, resource: bindings, ignored listing per whitelist
Sep  6 21:54:36.211: INFO: namespace e2e-tests-kubectl-f5hq7 deletion completed in 6.136209049s

• [SLOW TEST:13.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:36.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-8e156c27-f08b-11ea-b72c-0242ac110008
STEP: Creating secret with name secret-projected-all-test-volume-8e156bfd-f08b-11ea-b72c-0242ac110008
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep  6 21:54:36.342: INFO: Waiting up to 5m0s for pod "projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-rcdh4" to be "success or failure"
Sep  6 21:54:36.364: INFO: Pod "projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.713759ms
Sep  6 21:54:38.367: INFO: Pod "projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025306328s
Sep  6 21:54:40.371: INFO: Pod "projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029439383s
STEP: Saw pod success
Sep  6 21:54:40.371: INFO: Pod "projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:54:40.374: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008 container projected-all-volume-test: 
STEP: delete the pod
Sep  6 21:54:40.396: INFO: Waiting for pod projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:54:40.400: INFO: Pod projected-volume-8e156b7d-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:54:40.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rcdh4" for this suite.
Sep  6 21:54:46.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:46.474: INFO: namespace: e2e-tests-projected-rcdh4, resource: bindings, ignored listing per whitelist
Sep  6 21:54:46.508: INFO: namespace e2e-tests-projected-rcdh4 deletion completed in 6.104435176s

• [SLOW TEST:10.297 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:46.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Sep  6 21:54:46.617: INFO: Pod name pod-release: Found 0 pods out of 1
Sep  6 21:54:51.622: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:54:52.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qwnlr" for this suite.
Sep  6 21:54:58.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:54:58.834: INFO: namespace: e2e-tests-replication-controller-qwnlr, resource: bindings, ignored listing per whitelist
Sep  6 21:54:58.838: INFO: namespace e2e-tests-replication-controller-qwnlr deletion completed in 6.194284226s

• [SLOW TEST:12.330 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:54:58.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ltvwb
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  6 21:54:58.924: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  6 21:55:25.121: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostName&protocol=udp&host=10.244.1.160&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-ltvwb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:55:25.121: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:55:25.157481       7 log.go:172] (0xc0021422c0) (0xc000194b40) Create stream
I0906 21:55:25.157516       7 log.go:172] (0xc0021422c0) (0xc000194b40) Stream added, broadcasting: 1
I0906 21:55:25.161058       7 log.go:172] (0xc0021422c0) Reply frame received for 1
I0906 21:55:25.161104       7 log.go:172] (0xc0021422c0) (0xc001f6a140) Create stream
I0906 21:55:25.161115       7 log.go:172] (0xc0021422c0) (0xc001f6a140) Stream added, broadcasting: 3
I0906 21:55:25.162718       7 log.go:172] (0xc0021422c0) Reply frame received for 3
I0906 21:55:25.162760       7 log.go:172] (0xc0021422c0) (0xc0008bb5e0) Create stream
I0906 21:55:25.162775       7 log.go:172] (0xc0021422c0) (0xc0008bb5e0) Stream added, broadcasting: 5
I0906 21:55:25.163725       7 log.go:172] (0xc0021422c0) Reply frame received for 5
I0906 21:55:25.237166       7 log.go:172] (0xc0021422c0) Data frame received for 3
I0906 21:55:25.237201       7 log.go:172] (0xc001f6a140) (3) Data frame handling
I0906 21:55:25.237229       7 log.go:172] (0xc001f6a140) (3) Data frame sent
I0906 21:55:25.238175       7 log.go:172] (0xc0021422c0) Data frame received for 3
I0906 21:55:25.238210       7 log.go:172] (0xc001f6a140) (3) Data frame handling
I0906 21:55:25.238242       7 log.go:172] (0xc0021422c0) Data frame received for 5
I0906 21:55:25.238255       7 log.go:172] (0xc0008bb5e0) (5) Data frame handling
I0906 21:55:25.240617       7 log.go:172] (0xc0021422c0) Data frame received for 1
I0906 21:55:25.240678       7 log.go:172] (0xc000194b40) (1) Data frame handling
I0906 21:55:25.240732       7 log.go:172] (0xc000194b40) (1) Data frame sent
I0906 21:55:25.240765       7 log.go:172] (0xc0021422c0) (0xc000194b40) Stream removed, broadcasting: 1
I0906 21:55:25.240791       7 log.go:172] (0xc0021422c0) Go away received
I0906 21:55:25.240866       7 log.go:172] (0xc0021422c0) (0xc000194b40) Stream removed, broadcasting: 1
I0906 21:55:25.240893       7 log.go:172] (0xc0021422c0) (0xc001f6a140) Stream removed, broadcasting: 3
I0906 21:55:25.240908       7 log.go:172] (0xc0021422c0) (0xc0008bb5e0) Stream removed, broadcasting: 5
Sep  6 21:55:25.240: INFO: Waiting for endpoints: map[]
Sep  6 21:55:25.244: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.168:8080/dial?request=hostName&protocol=udp&host=10.244.2.167&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-ltvwb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 21:55:25.244: INFO: >>> kubeConfig: /root/.kube/config
I0906 21:55:25.273355       7 log.go:172] (0xc002142790) (0xc0003497c0) Create stream
I0906 21:55:25.273379       7 log.go:172] (0xc002142790) (0xc0003497c0) Stream added, broadcasting: 1
I0906 21:55:25.275319       7 log.go:172] (0xc002142790) Reply frame received for 1
I0906 21:55:25.275356       7 log.go:172] (0xc002142790) (0xc000349a40) Create stream
I0906 21:55:25.275370       7 log.go:172] (0xc002142790) (0xc000349a40) Stream added, broadcasting: 3
I0906 21:55:25.276353       7 log.go:172] (0xc002142790) Reply frame received for 3
I0906 21:55:25.276377       7 log.go:172] (0xc002142790) (0xc000349cc0) Create stream
I0906 21:55:25.276389       7 log.go:172] (0xc002142790) (0xc000349cc0) Stream added, broadcasting: 5
I0906 21:55:25.277216       7 log.go:172] (0xc002142790) Reply frame received for 5
I0906 21:55:25.338631       7 log.go:172] (0xc002142790) Data frame received for 3
I0906 21:55:25.338656       7 log.go:172] (0xc000349a40) (3) Data frame handling
I0906 21:55:25.338671       7 log.go:172] (0xc000349a40) (3) Data frame sent
I0906 21:55:25.339252       7 log.go:172] (0xc002142790) Data frame received for 5
I0906 21:55:25.339289       7 log.go:172] (0xc000349cc0) (5) Data frame handling
I0906 21:55:25.339324       7 log.go:172] (0xc002142790) Data frame received for 3
I0906 21:55:25.339341       7 log.go:172] (0xc000349a40) (3) Data frame handling
I0906 21:55:25.341018       7 log.go:172] (0xc002142790) Data frame received for 1
I0906 21:55:25.341047       7 log.go:172] (0xc0003497c0) (1) Data frame handling
I0906 21:55:25.341064       7 log.go:172] (0xc0003497c0) (1) Data frame sent
I0906 21:55:25.341115       7 log.go:172] (0xc002142790) (0xc0003497c0) Stream removed, broadcasting: 1
I0906 21:55:25.341156       7 log.go:172] (0xc002142790) Go away received
I0906 21:55:25.341313       7 log.go:172] (0xc002142790) (0xc0003497c0) Stream removed, broadcasting: 1
I0906 21:55:25.341344       7 log.go:172] (0xc002142790) (0xc000349a40) Stream removed, broadcasting: 3
I0906 21:55:25.341362       7 log.go:172] (0xc002142790) (0xc000349cc0) Stream removed, broadcasting: 5
Sep  6 21:55:25.341: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:55:25.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-ltvwb" for this suite.
Sep  6 21:55:47.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:55:47.420: INFO: namespace: e2e-tests-pod-network-test-ltvwb, resource: bindings, ignored listing per whitelist
Sep  6 21:55:47.433: INFO: namespace e2e-tests-pod-network-test-ltvwb deletion completed in 22.088020297s

• [SLOW TEST:48.595 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
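The intra-pod UDP check above drives the test image's /dial endpoint from the host-test pod: the netserver is asked to dial the target pod over UDP and report which hostname answered. A standard-library Go sketch of the same request; the IPs are illustrative, and the JSON shape in the comment is an assumption about the test image, not output captured from this run.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same query string as the curl in the log: dial host:port over UDP once
	// and return the hostname(s) that answered.
	url := "http://10.244.2.168:8080/dial?request=hostName&protocol=udp&host=10.244.1.160&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // expected to look like {"responses":["netserver-0"]}
}

------------------------------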
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:55:47.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b889ecba-f08b-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 21:55:47.560: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-xn44n" to be "success or failure"
Sep  6 21:55:47.617: INFO: Pod "pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 56.943226ms
Sep  6 21:55:49.621: INFO: Pod "pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061040179s
Sep  6 21:55:51.625: INFO: Pod "pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065054262s
STEP: Saw pod success
Sep  6 21:55:51.625: INFO: Pod "pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:55:51.628: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 21:55:51.643: INFO: Waiting for pod pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:55:51.675: INFO: Pod pod-projected-configmaps-b88bff51-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:55:51.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xn44n" for this suite.
Sep  6 21:55:57.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:55:57.740: INFO: namespace: e2e-tests-projected-xn44n, resource: bindings, ignored listing per whitelist
Sep  6 21:55:57.771: INFO: namespace e2e-tests-projected-xn44n deletion completed in 6.091815315s

• [SLOW TEST:10.338 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:55:57.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Sep  6 21:55:57.890: INFO: Waiting up to 5m0s for pod "var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-var-expansion-f5gwp" to be "success or failure"
Sep  6 21:55:57.894: INFO: Pod "var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489139ms
Sep  6 21:55:59.897: INFO: Pod "var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006978176s
Sep  6 21:56:01.901: INFO: Pod "var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010985662s
STEP: Saw pod success
Sep  6 21:56:01.901: INFO: Pod "var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:56:01.904: INFO: Trying to get logs from node hunter-worker pod var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008 container dapi-container: 
STEP: delete the pod
Sep  6 21:56:02.007: INFO: Waiting for pod var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:56:02.055: INFO: Pod var-expansion-beb263e4-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:56:02.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-f5gwp" for this suite.
Sep  6 21:56:08.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:56:08.154: INFO: namespace: e2e-tests-var-expansion-f5gwp, resource: bindings, ignored listing per whitelist
Sep  6 21:56:08.154: INFO: namespace e2e-tests-var-expansion-f5gwp deletion completed in 6.095483725s

• [SLOW TEST:10.383 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
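The variable-expansion test above verifies that $(VAR) references in a container's command are resolved by the kubelet from the container's environment before the process starts, with no shell involved. A sketch of such a container; the name, image and values are illustrative.

package sketch

import corev1 "k8s.io/api/core/v1"

// expansionContainer runs a plain echo (no shell): the kubelet substitutes
// $(MESSAGE) from the env before exec, so the container prints the value.
func expansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"echo", "$(MESSAGE)"},
		Env: []corev1.EnvVar{{
			Name:  "MESSAGE",
			Value: "hello from the environment",
		}},
	}
}

------------------------------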
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:56:08.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  6 21:56:08.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-sm99m" to be "success or failure"
Sep  6 21:56:08.313: INFO: Pod "downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.327266ms
Sep  6 21:56:10.323: INFO: Pod "downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020021795s
Sep  6 21:56:12.326: INFO: Pod "downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023466548s
STEP: Saw pod success
Sep  6 21:56:12.326: INFO: Pod "downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 21:56:12.328: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008 container client-container: 
STEP: delete the pod
Sep  6 21:56:12.378: INFO: Waiting for pod downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008 to disappear
Sep  6 21:56:12.403: INFO: Pod downwardapi-volume-c4e8bf59-f08b-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:56:12.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sm99m" for this suite.
Sep  6 21:56:18.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:56:18.506: INFO: namespace: e2e-tests-downward-api-sm99m, resource: bindings, ignored listing per whitelist
Sep  6 21:56:18.508: INFO: namespace e2e-tests-downward-api-sm99m deletion completed in 6.083937409s

• [SLOW TEST:10.354 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:56:18.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep  6 21:56:26.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:26.776: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:28.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:28.780: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:30.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:30.780: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:32.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:32.784: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:34.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:34.780: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:36.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:36.779: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:38.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:38.797: INFO: Pod pod-with-prestop-http-hook still exists
Sep  6 21:56:40.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep  6 21:56:40.781: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:56:40.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fp5lq" for this suite.
Sep  6 21:57:02.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:57:02.911: INFO: namespace: e2e-tests-container-lifecycle-hook-fp5lq, resource: bindings, ignored listing per whitelist
Sep  6 21:57:02.945: INFO: namespace e2e-tests-container-lifecycle-hook-fp5lq deletion completed in 22.155641396s

• [SLOW TEST:44.436 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
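
A hedged sketch of the "pod with lifecycle hook" created in the spec above, again using the v1.13-era core/v1 types (corev1.Handler was renamed LifecycleHandler in later client-go releases); the target IP and the /echo path are illustrative, not taken from the log:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPHookPod sends an HTTP GET to a separate handler pod when it is
// deleted; targetIP is assumed to be the IP of the pod created in the
// "create the container to handle the HTTPGet hook request" step.
func preStopHTTPHookPod(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // illustrative path
							Port: intstr.FromInt(8080),
							Host: targetIP,
						},
					},
				},
			}},
		},
	}
}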
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:57:02.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-e5a214c1-f08b-11ea-b72c-0242ac110008
STEP: Creating secret with name s-test-opt-upd-e5a21543-f08b-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e5a214c1-f08b-11ea-b72c-0242ac110008
STEP: Updating secret s-test-opt-upd-e5a21543-f08b-11ea-b72c-0242ac110008
STEP: Creating secret with name s-test-opt-create-e5a21575-f08b-11ea-b72c-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:57:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lxdbl" for this suite.
Sep  6 21:57:33.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:57:33.609: INFO: namespace: e2e-tests-projected-lxdbl, resource: bindings, ignored listing per whitelist
Sep  6 21:57:33.625: INFO: namespace e2e-tests-projected-lxdbl deletion completed in 22.091309545s

• [SLOW TEST:30.680 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
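
A rough sketch of the projected-secret pod this spec relies on, assuming the core/v1 types used above; the secret names are passed in (matching the s-test-opt-del/upd/create roles in the STEP lines) and the data-1 key and mount path are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedOptionalSecretPod mounts three secrets through one projected
// volume, all marked optional, so the volume stays healthy while the test
// deletes one secret, updates another, and creates the third.
func projectedOptionalSecretPod(delName, updName, createName string) *corev1.Pod {
	optional := true
	source := func(name, dir string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Items:                []corev1.KeyToPath{{Key: "data-1", Path: dir + "/data-1"}},
				Optional:             &optional,
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-secrets", MountPath: "/etc/projected-secrets", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							source(delName, "delete"),
							source(updName, "update"),
							source(createName, "create"),
						},
					},
				},
			}},
		},
	}
}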
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:57:33.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Sep  6 21:57:33.733: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:57:40.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9xzbn" for this suite.
Sep  6 21:58:02.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:58:02.985: INFO: namespace: e2e-tests-init-container-9xzbn, resource: bindings, ignored listing per whitelist
Sep  6 21:58:03.028: INFO: namespace e2e-tests-init-container-9xzbn deletion completed in 22.107093221s

• [SLOW TEST:29.402 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
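
For reference, a minimal sketch of the pod shape this spec creates (the same pattern as the failing-init variant dumped later in this run, but with both init containers succeeding); only the object name is illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainersRestartAlwaysPod runs two init containers that exit
// successfully before the pause container starts; with RestartPolicy Always
// the pod should reach Running once both init containers complete.
func initContainersRestartAlwaysPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}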
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:58:03.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:58:07.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-krj5d" for this suite.
Sep  6 21:58:53.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 21:58:53.240: INFO: namespace: e2e-tests-kubelet-test-krj5d, resource: bindings, ignored listing per whitelist
Sep  6 21:58:53.288: INFO: namespace e2e-tests-kubelet-test-krj5d deletion completed in 46.097388558s

• [SLOW TEST:50.259 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
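
A hedged sketch of the read-only busybox pod the spec above schedules, assuming the core/v1 types; the write command and object name are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootBusyboxPod tries to write to the root filesystem; with
// ReadOnlyRootFilesystem set the write must fail, which is what the
// "should not write to root filesystem" spec asserts.
func readOnlyRootBusyboxPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox-readonly",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"}, // illustrative command
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}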
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 21:58:53.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Sep  6 21:58:53.407: INFO: PodSpec: initContainers in spec.initContainers
Sep  6 21:59:42.970: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-275339a4-f08c-11ea-b72c-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-vlct2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vlct2/pods/pod-init-275339a4-f08c-11ea-b72c-0242ac110008", UID:"2753d96f-f08c-11ea-b060-0242ac120006", ResourceVersion:"230324", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735026333, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"407486844"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-295t4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00208e700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-295t4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-295t4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-295t4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a7ce28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001bb4a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a7ceb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a7ced0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a7ced8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a7cedc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026333, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026333, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026333, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026333, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.173", StartTime:(*v1.Time)(0xc00167c0c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020d1340)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020d13b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://642a431c6c66b93e16900498815fd8d209e3958cd9ba964da91440a36a14b114"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00167c160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00167c0e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 21:59:42.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vlct2" for this suite.
Sep  6 22:00:05.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:00:05.134: INFO: namespace: e2e-tests-init-container-vlct2, resource: bindings, ignored listing per whitelist
Sep  6 22:00:05.148: INFO: namespace e2e-tests-init-container-vlct2 deletion completed in 22.102105169s

• [SLOW TEST:71.860 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
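
A condensed reconstruction of the PodSpec dumped above (init1 running /bin/false, init2 /bin/true, app container run1 on the pause image with 100m CPU and 52428800 bytes of memory); the volume, token, and namespace-specific details are omitted:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitContainerPod keeps failing in init1, so init2 and run1 must
// never start: the pod stays Pending while init1's RestartCount climbs,
// which is the "failed twice" condition logged above.
func failingInitContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-failing-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("100m"),
						corev1.ResourceMemory: resource.MustParse("52428800"),
					},
				},
			}},
		},
	}
}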
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:00:05.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5221a646-f08c-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:00:05.287: INFO: Waiting up to 5m0s for pod "pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-p4bgs" to be "success or failure"
Sep  6 22:00:05.294: INFO: Pod "pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.704271ms
Sep  6 22:00:07.299: INFO: Pod "pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012079081s
Sep  6 22:00:09.304: INFO: Pod "pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017764397s
STEP: Saw pod success
Sep  6 22:00:09.305: INFO: Pod "pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:00:09.308: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 22:00:09.342: INFO: Waiting for pod pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008 to disappear
Sep  6 22:00:09.355: INFO: Pod pod-secrets-5224cef4-f08c-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:00:09.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p4bgs" for this suite.
Sep  6 22:00:15.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:00:15.419: INFO: namespace: e2e-tests-secrets-p4bgs, resource: bindings, ignored listing per whitelist
Sep  6 22:00:15.458: INFO: namespace e2e-tests-secrets-p4bgs deletion completed in 6.100196399s

• [SLOW TEST:10.309 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
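
A minimal sketch of the "consume secrets" pod above, assuming the core/v1 types; the key name data-1 and the mount paths are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretInTwoVolumesPod mounts the same secret through two distinct volumes
// at two mount points and prints one of the keys, which is what the
// "consumable in multiple volumes in a pod" spec checks for.
func secretInTwoVolumesPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
		},
	}
}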
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:00:15.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Sep  6 22:00:15.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:15.821: INFO: stderr: ""
Sep  6 22:00:15.821: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 22:00:15.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:15.956: INFO: stderr: ""
Sep  6 22:00:15.956: INFO: stdout: "update-demo-nautilus-drm7d update-demo-nautilus-tv9kb "
Sep  6 22:00:15.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drm7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:16.043: INFO: stderr: ""
Sep  6 22:00:16.043: INFO: stdout: ""
Sep  6 22:00:16.043: INFO: update-demo-nautilus-drm7d is created but not running
Sep  6 22:00:21.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.139: INFO: stderr: ""
Sep  6 22:00:21.139: INFO: stdout: "update-demo-nautilus-drm7d update-demo-nautilus-tv9kb "
Sep  6 22:00:21.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drm7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.229: INFO: stderr: ""
Sep  6 22:00:21.229: INFO: stdout: "true"
Sep  6 22:00:21.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drm7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.325: INFO: stderr: ""
Sep  6 22:00:21.325: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:00:21.325: INFO: validating pod update-demo-nautilus-drm7d
Sep  6 22:00:21.330: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:00:21.330: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:00:21.330: INFO: update-demo-nautilus-drm7d is verified up and running
Sep  6 22:00:21.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tv9kb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.435: INFO: stderr: ""
Sep  6 22:00:21.435: INFO: stdout: "true"
Sep  6 22:00:21.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tv9kb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.533: INFO: stderr: ""
Sep  6 22:00:21.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:00:21.533: INFO: validating pod update-demo-nautilus-tv9kb
Sep  6 22:00:21.537: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:00:21.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:00:21.537: INFO: update-demo-nautilus-tv9kb is verified up and running
STEP: using delete to clean up resources
Sep  6 22:00:21.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:21.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  6 22:00:21.632: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep  6 22:00:21.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hvg26'
Sep  6 22:00:22.057: INFO: stderr: "No resources found.\n"
Sep  6 22:00:22.057: INFO: stdout: ""
Sep  6 22:00:22.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hvg26 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  6 22:00:22.142: INFO: stderr: ""
Sep  6 22:00:22.142: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:00:22.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hvg26" for this suite.
Sep  6 22:00:28.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:00:28.264: INFO: namespace: e2e-tests-kubectl-hvg26, resource: bindings, ignored listing per whitelist
Sep  6 22:00:28.283: INFO: namespace e2e-tests-kubectl-hvg26 deletion completed in 6.138459138s

• [SLOW TEST:12.825 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
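
The manifest piped to "kubectl create -f -" above is not shown in the log; the following is a rough equivalent expressed with the core/v1 types, assuming two replicas and the nautilus image reported by the validation steps (the label key and container port are illustrative):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// updateDemoNautilusRC approximates the replication controller whose pods
// (update-demo-nautilus-*) the kubectl go-templates above poll until the
// update-demo container is running and serving nautilus.jpg.
func updateDemoNautilusRC() *corev1.ReplicationController {
	replicas := int32(2)
	labels := map[string]string{"name": "update-demo"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}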
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:00:28.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Sep  6 22:00:32.994: INFO: Successfully updated pod "labelsupdate5ff5d22c-f08c-11ea-b72c-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:00:35.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7gbf" for this suite.
Sep  6 22:00:57.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:00:57.171: INFO: namespace: e2e-tests-projected-t7gbf, resource: bindings, ignored listing per whitelist
Sep  6 22:00:57.171: INFO: namespace e2e-tests-projected-t7gbf deletion completed in 22.096979544s

• [SLOW TEST:28.887 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
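
A hedged sketch of the labels-projection pod this spec updates, assuming the core/v1 types; the initial label, command, and mount path are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsDownwardAPIPod projects the pod's own labels into a file; the spec
// then patches the labels ("Successfully updated pod" above) and waits for
// the kubelet to rewrite the projected file.
func labelsDownwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example", // illustrative name
			Labels: map[string]string{"key1": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}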
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:00:57.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Sep  6 22:00:57.799: INFO: Waiting up to 5m0s for pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5" in namespace "e2e-tests-svcaccounts-n729n" to be "success or failure"
Sep  6 22:00:57.849: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.525ms
Sep  6 22:00:59.867: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068227211s
Sep  6 22:01:01.903: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104450473s
Sep  6 22:01:03.907: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108575014s
STEP: Saw pod success
Sep  6 22:01:03.907: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5" satisfied condition "success or failure"
Sep  6 22:01:03.910: INFO: Trying to get logs from node hunter-worker pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5 container token-test: 
STEP: delete the pod
Sep  6 22:01:03.970: INFO: Waiting for pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5 to disappear
Sep  6 22:01:03.978: INFO: Pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-5qbg5 no longer exists
STEP: Creating a pod to test consume service account root CA
Sep  6 22:01:03.981: INFO: Waiting up to 5m0s for pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn" in namespace "e2e-tests-svcaccounts-n729n" to be "success or failure"
Sep  6 22:01:03.984: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.22991ms
Sep  6 22:01:05.988: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007352011s
Sep  6 22:01:07.992: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011193883s
Sep  6 22:01:09.998: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016687612s
STEP: Saw pod success
Sep  6 22:01:09.998: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn" satisfied condition "success or failure"
Sep  6 22:01:10.001: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn container root-ca-test: 
STEP: delete the pod
Sep  6 22:01:10.079: INFO: Waiting for pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn to disappear
Sep  6 22:01:10.085: INFO: Pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-flgzn no longer exists
STEP: Creating a pod to test consume service account namespace
Sep  6 22:01:10.088: INFO: Waiting up to 5m0s for pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj" in namespace "e2e-tests-svcaccounts-n729n" to be "success or failure"
Sep  6 22:01:10.097: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.853297ms
Sep  6 22:01:12.101: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01281027s
Sep  6 22:01:14.104: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016598613s
Sep  6 22:01:16.109: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02097756s
STEP: Saw pod success
Sep  6 22:01:16.109: INFO: Pod "pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj" satisfied condition "success or failure"
Sep  6 22:01:16.111: INFO: Trying to get logs from node hunter-worker pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj container namespace-test: 
STEP: delete the pod
Sep  6 22:01:16.180: INFO: Waiting for pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj to disappear
Sep  6 22:01:16.187: INFO: Pod pod-service-account-71771918-f08c-11ea-b72c-0242ac110008-4mkkj no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:01:16.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-n729n" for this suite.
Sep  6 22:01:22.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:01:22.228: INFO: namespace: e2e-tests-svcaccounts-n729n, resource: bindings, ignored listing per whitelist
Sep  6 22:01:22.275: INFO: namespace e2e-tests-svcaccounts-n729n deletion completed in 6.084585357s

• [SLOW TEST:25.104 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
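
A minimal sketch of one of the three "consume service account" pods above (token, root CA, namespace), assuming the core/v1 types and the default automount path; the container command is illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tokenTestPod relies on the default service account credentials being
// mounted at the well-known path and simply prints the token file; the spec
// runs one such pod each for token, ca.crt, and namespace.
func tokenTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-example"}, // illustrative name
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			RestartPolicy:      corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"},
			}},
		},
	}
}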
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:01:22.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-twjk6
Sep  6 22:01:26.460: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-twjk6
STEP: checking the pod's current state and verifying that restartCount is present
Sep  6 22:01:26.462: INFO: Initial restart count of pod liveness-http is 0
Sep  6 22:01:44.532: INFO: Restart count of pod e2e-tests-container-probe-twjk6/liveness-http is now 1 (18.069838898s elapsed)
Sep  6 22:02:04.612: INFO: Restart count of pod e2e-tests-container-probe-twjk6/liveness-http is now 2 (38.150015985s elapsed)
Sep  6 22:02:24.668: INFO: Restart count of pod e2e-tests-container-probe-twjk6/liveness-http is now 3 (58.206097496s elapsed)
Sep  6 22:02:44.766: INFO: Restart count of pod e2e-tests-container-probe-twjk6/liveness-http is now 4 (1m18.304121307s elapsed)
Sep  6 22:03:49.007: INFO: Restart count of pod e2e-tests-container-probe-twjk6/liveness-http is now 5 (2m22.544940414s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:03:49.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-twjk6" for this suite.
Sep  6 22:03:55.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:03:55.107: INFO: namespace: e2e-tests-container-probe-twjk6, resource: bindings, ignored listing per whitelist
Sep  6 22:03:55.110: INFO: namespace e2e-tests-container-probe-twjk6 deletion completed in 6.08449026s

• [SLOW TEST:152.834 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
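
A sketch of the liveness-http pod behind the restart counts above, assuming the v1.13-era core/v1 types (the embedded corev1.Handler became ProbeHandler in later client-go releases); the image reference and probe timings are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessHTTPPod probes /healthz on a server that eventually starts
// failing, so the kubelet keeps restarting the container and the observed
// RestartCount only ever increases, as the log above records (0 through 5).
func livenessHTTPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness:latest", // illustrative image reference
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}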
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:03:55.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-db36014e-f08c-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:03:55.303: INFO: Waiting up to 5m0s for pod "pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-xttjz" to be "success or failure"
Sep  6 22:03:55.330: INFO: Pod "pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.315351ms
Sep  6 22:03:57.381: INFO: Pod "pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077831876s
Sep  6 22:03:59.385: INFO: Pod "pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082219407s
STEP: Saw pod success
Sep  6 22:03:59.385: INFO: Pod "pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:03:59.389: INFO: Trying to get logs from node hunter-worker pod pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 22:03:59.406: INFO: Waiting for pod pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008 to disappear
Sep  6 22:03:59.436: INFO: Pod pod-secrets-db429aa6-f08c-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:03:59.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xttjz" for this suite.
Sep  6 22:04:05.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:04:05.469: INFO: namespace: e2e-tests-secrets-xttjz, resource: bindings, ignored listing per whitelist
Sep  6 22:04:05.533: INFO: namespace e2e-tests-secrets-xttjz deletion completed in 6.092927641s
STEP: Destroying namespace "e2e-tests-secret-namespace-skpxr" for this suite.
Sep  6 22:04:11.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:04:11.578: INFO: namespace: e2e-tests-secret-namespace-skpxr, resource: bindings, ignored listing per whitelist
Sep  6 22:04:11.621: INFO: namespace e2e-tests-secret-namespace-skpxr deletion completed in 6.087929478s

• [SLOW TEST:16.512 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
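
The reason the two namespaces above can hold a secret with the same name without interfering is that SecretVolumeSource only names a secret, never a namespace; a minimal sketch of the secret objects the spec creates, with the key and value illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sameNameSecret builds a secret with the given name in the given namespace;
// the spec creates one in the test namespace and one in a second namespace
// and checks the mounting pod only ever sees the copy from its own namespace.
func sameNameSecret(namespace, name string) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
		Data:       map[string][]byte{"data-1": []byte("value-1")}, // illustrative key/value
	}
}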
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:04:11.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e50c39e2-f08c-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:04:11.761: INFO: Waiting up to 5m0s for pod "pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-gvbth" to be "success or failure"
Sep  6 22:04:11.770: INFO: Pod "pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.609479ms
Sep  6 22:04:13.773: INFO: Pod "pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012683138s
Sep  6 22:04:15.780: INFO: Pod "pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019094989s
STEP: Saw pod success
Sep  6 22:04:15.780: INFO: Pod "pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:04:15.782: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 22:04:15.981: INFO: Waiting for pod pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008 to disappear
Sep  6 22:04:16.055: INFO: Pod pod-secrets-e51274f8-f08c-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:04:16.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gvbth" for this suite.
Sep  6 22:04:22.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:04:22.157: INFO: namespace: e2e-tests-secrets-gvbth, resource: bindings, ignored listing per whitelist
Sep  6 22:04:22.172: INFO: namespace e2e-tests-secrets-gvbth deletion completed in 6.113756438s

• [SLOW TEST:10.551 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
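
A minimal sketch of the mapped-secret pod this spec uses, assuming the core/v1 types; the key, mapped path, and mount path are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mappedSecretVolumePod remaps a secret key to a new file name via Items,
// so the container reads the value at the mapped path rather than the
// default path named after the key.
func mappedSecretVolumePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapped-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
		},
	}
}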
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:04:22.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  6 22:04:22.284: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep  6 22:04:22.311: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep  6 22:04:27.315: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep  6 22:04:27.315: INFO: Creating deployment "test-rolling-update-deployment"
Sep  6 22:04:27.323: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Sep  6 22:04:27.384: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Sep  6 22:04:29.392: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Sep  6 22:04:29.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  6 22:04:31.398: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Sep  6 22:04:31.408: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-rg7b5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7b5/deployments/test-rolling-update-deployment,UID:ee59d0b4-f08c-11ea-b060-0242ac120006,ResourceVersion:231234,Generation:1,CreationTimestamp:2020-09-06 22:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-06 22:04:27 +0000 UTC 2020-09-06 22:04:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-06 22:04:30 +0000 UTC 2020-09-06 22:04:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep  6 22:04:31.412: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-rg7b5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7b5/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ee64f9fb-f08c-11ea-b060-0242ac120006,ResourceVersion:231225,Generation:1,CreationTimestamp:2020-09-06 22:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ee59d0b4-f08c-11ea-b060-0242ac120006 0xc00090c6b7 0xc00090c6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep  6 22:04:31.412: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Sep  6 22:04:31.412: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-rg7b5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7b5/replicasets/test-rolling-update-controller,UID:eb5a24d5-f08c-11ea-b060-0242ac120006,ResourceVersion:231233,Generation:2,CreationTimestamp:2020-09-06 22:04:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ee59d0b4-f08c-11ea-b060-0242ac120006 0xc000d91e57 0xc000d91e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  6 22:04:31.416: INFO: Pod "test-rolling-update-deployment-75db98fb4c-tdjhw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-tdjhw,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-rg7b5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rg7b5/pods/test-rolling-update-deployment-75db98fb4c-tdjhw,UID:ee67159c-f08c-11ea-b060-0242ac120006,ResourceVersion:231224,Generation:0,CreationTimestamp:2020-09-06 22:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ee64f9fb-f08c-11ea-b060-0242ac120006 0xc000ba89f7 0xc000ba89f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hzxfh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hzxfh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hzxfh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba8d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba8db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 22:04:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 22:04:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 22:04:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-06 22:04:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.178,StartTime:2020-09-06 22:04:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-06 22:04:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://50b2f0f6068c66191902ef03bdecfd43dc0bdec5ddddff387fc873484deda727}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:04:31.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rg7b5" for this suite.
Sep  6 22:04:39.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:04:39.502: INFO: namespace: e2e-tests-deployment-rg7b5, resource: bindings, ignored listing per whitelist
Sep  6 22:04:39.523: INFO: namespace e2e-tests-deployment-rg7b5 deletion completed in 8.103418872s

• [SLOW TEST:17.350 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
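
The rolling update exercised by this test can be sketched by hand with kubectl. A minimal, illustrative reproduction (the Deployment name and the container name "app" are made up here; the two images are the ones the test uses):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Switch the pod template to the redis image; the default RollingUpdate
# strategy (25% maxUnavailable / 25% maxSurge) replaces the old pod with a new one.
kubectl set image deployment/rolling-update-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/rolling-update-demo
# The old ReplicaSet is kept at 0 replicas; the new one owns the running pod.
kubectl get rs -l name=sample-pod
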
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:04:39.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f5b75822-f08c-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:04:39.753: INFO: Waiting up to 5m0s for pod "pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-2sc5l" to be "success or failure"
Sep  6 22:04:39.788: INFO: Pod "pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 35.051324ms
Sep  6 22:04:41.804: INFO: Pod "pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051313821s
Sep  6 22:04:43.808: INFO: Pod "pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055441213s
STEP: Saw pod success
Sep  6 22:04:43.808: INFO: Pod "pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:04:43.811: INFO: Trying to get logs from node hunter-worker pod pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008 container secret-env-test: 
STEP: delete the pod
Sep  6 22:04:43.955: INFO: Waiting for pod pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008 to disappear
Sep  6 22:04:44.056: INFO: Pod pod-secrets-f5bbf367-f08c-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:04:44.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2sc5l" for this suite.
Sep  6 22:04:50.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:04:50.114: INFO: namespace: e2e-tests-secrets-2sc5l, resource: bindings, ignored listing per whitelist
Sep  6 22:04:50.194: INFO: namespace e2e-tests-secrets-2sc5l deletion completed in 6.13484151s

• [SLOW TEST:10.671 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
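
What this test's pod does can be sketched outside the framework: create a Secret and expose one of its keys through valueFrom.secretKeyRef (the secret, pod, key, and env var names below are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF
# Once the pod has Succeeded, its log contains the secret value.
kubectl logs secret-env-demo
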
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:04:50.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-fc092543-f08c-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:04:50.364: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-rb2sg" to be "success or failure"
Sep  6 22:04:50.393: INFO: Pod "pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.159499ms
Sep  6 22:04:52.397: INFO: Pod "pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032934222s
Sep  6 22:04:54.401: INFO: Pod "pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036946595s
STEP: Saw pod success
Sep  6 22:04:54.401: INFO: Pod "pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:04:54.403: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Sep  6 22:04:54.420: INFO: Waiting for pod pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008 to disappear
Sep  6 22:04:54.430: INFO: Pod pod-projected-secrets-fc120e24-f08c-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:04:54.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rb2sg" for this suite.
Sep  6 22:05:00.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:05:00.539: INFO: namespace: e2e-tests-projected-rb2sg, resource: bindings, ignored listing per whitelist
Sep  6 22:05:00.566: INFO: namespace e2e-tests-projected-rb2sg deletion completed in 6.13341816s

• [SLOW TEST:10.372 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
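
The projected-secret variant mounts the same kind of Secret through a projected volume instead of environment variables. A minimal sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
# Each Secret key appears as a file under the mount path; the pod's log shows its value.
kubectl logs projected-secret-demo
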
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:05:00.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Sep  6 22:05:04.749: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-023f661f-f08d-11ea-b72c-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-dt5tw", SelfLink:"/api/v1/namespaces/e2e-tests-pods-dt5tw/pods/pod-submit-remove-023f661f-f08d-11ea-b72c-0242ac110008", UID:"024080b3-f08d-11ea-b060-0242ac120006", ResourceVersion:"231389", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735026700, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"698587458"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r2hms", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002147ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2hms", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002acdc88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e471a0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002acdcd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002acdcf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002acdcf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002acdcfc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026700, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026703, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026703, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735026700, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.180", StartTime:(*v1.Time)(0xc002451d40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002451d60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://d519d5f747bbe2a714342d3a987e856ef57012557eed50bde285d1581e0b31c9"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:05:10.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dt5tw" for this suite.
Sep  6 22:05:16.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:05:16.162: INFO: namespace: e2e-tests-pods-dt5tw, resource: bindings, ignored listing per whitelist
Sep  6 22:05:16.190: INFO: namespace e2e-tests-pods-dt5tw deletion completed in 6.093828781s

• [SLOW TEST:15.624 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
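
The same submit / observe / delete cycle can be driven with plain kubectl (the image is the one the test uses; the pod name is illustrative, and the watch is the manual analogue of the test's watch):

kubectl run pod-submit-remove --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pods -w          # in a second shell: shows the pod being created, running, then removed
kubectl delete pod pod-submit-remove --grace-period=30
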
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:05:16.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sgr55
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  6 22:05:16.309: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  6 22:05:38.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.181:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sgr55 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 22:05:38.486: INFO: >>> kubeConfig: /root/.kube/config
I0906 22:05:38.517474       7 log.go:172] (0xc0008fda20) (0xc0026ca960) Create stream
I0906 22:05:38.517501       7 log.go:172] (0xc0008fda20) (0xc0026ca960) Stream added, broadcasting: 1
I0906 22:05:38.519779       7 log.go:172] (0xc0008fda20) Reply frame received for 1
I0906 22:05:38.519839       7 log.go:172] (0xc0008fda20) (0xc0026914a0) Create stream
I0906 22:05:38.519868       7 log.go:172] (0xc0008fda20) (0xc0026914a0) Stream added, broadcasting: 3
I0906 22:05:38.520980       7 log.go:172] (0xc0008fda20) Reply frame received for 3
I0906 22:05:38.521015       7 log.go:172] (0xc0008fda20) (0xc0026caa00) Create stream
I0906 22:05:38.521025       7 log.go:172] (0xc0008fda20) (0xc0026caa00) Stream added, broadcasting: 5
I0906 22:05:38.521943       7 log.go:172] (0xc0008fda20) Reply frame received for 5
I0906 22:05:38.601025       7 log.go:172] (0xc0008fda20) Data frame received for 3
I0906 22:05:38.601117       7 log.go:172] (0xc0026914a0) (3) Data frame handling
I0906 22:05:38.601211       7 log.go:172] (0xc0026914a0) (3) Data frame sent
I0906 22:05:38.601284       7 log.go:172] (0xc0008fda20) Data frame received for 3
I0906 22:05:38.601328       7 log.go:172] (0xc0026914a0) (3) Data frame handling
I0906 22:05:38.601472       7 log.go:172] (0xc0008fda20) Data frame received for 5
I0906 22:05:38.601513       7 log.go:172] (0xc0026caa00) (5) Data frame handling
I0906 22:05:38.603449       7 log.go:172] (0xc0008fda20) Data frame received for 1
I0906 22:05:38.603494       7 log.go:172] (0xc0026ca960) (1) Data frame handling
I0906 22:05:38.603560       7 log.go:172] (0xc0026ca960) (1) Data frame sent
I0906 22:05:38.603592       7 log.go:172] (0xc0008fda20) (0xc0026ca960) Stream removed, broadcasting: 1
I0906 22:05:38.603632       7 log.go:172] (0xc0008fda20) Go away received
I0906 22:05:38.603798       7 log.go:172] (0xc0008fda20) (0xc0026ca960) Stream removed, broadcasting: 1
I0906 22:05:38.603829       7 log.go:172] (0xc0008fda20) (0xc0026914a0) Stream removed, broadcasting: 3
I0906 22:05:38.603839       7 log.go:172] (0xc0008fda20) (0xc0026caa00) Stream removed, broadcasting: 5
Sep  6 22:05:38.603: INFO: Found all expected endpoints: [netserver-0]
Sep  6 22:05:38.607: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.173:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sgr55 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  6 22:05:38.607: INFO: >>> kubeConfig: /root/.kube/config
I0906 22:05:38.649644       7 log.go:172] (0xc00045f760) (0xc002548dc0) Create stream
I0906 22:05:38.649682       7 log.go:172] (0xc00045f760) (0xc002548dc0) Stream added, broadcasting: 1
I0906 22:05:38.653598       7 log.go:172] (0xc00045f760) Reply frame received for 1
I0906 22:05:38.653698       7 log.go:172] (0xc00045f760) (0xc002b84000) Create stream
I0906 22:05:38.653724       7 log.go:172] (0xc00045f760) (0xc002b84000) Stream added, broadcasting: 3
I0906 22:05:38.654701       7 log.go:172] (0xc00045f760) Reply frame received for 3
I0906 22:05:38.654752       7 log.go:172] (0xc00045f760) (0xc002b36000) Create stream
I0906 22:05:38.654769       7 log.go:172] (0xc00045f760) (0xc002b36000) Stream added, broadcasting: 5
I0906 22:05:38.656235       7 log.go:172] (0xc00045f760) Reply frame received for 5
I0906 22:05:38.713201       7 log.go:172] (0xc00045f760) Data frame received for 3
I0906 22:05:38.713298       7 log.go:172] (0xc002b84000) (3) Data frame handling
I0906 22:05:38.713350       7 log.go:172] (0xc002b84000) (3) Data frame sent
I0906 22:05:38.713600       7 log.go:172] (0xc00045f760) Data frame received for 5
I0906 22:05:38.713661       7 log.go:172] (0xc002b36000) (5) Data frame handling
I0906 22:05:38.713697       7 log.go:172] (0xc00045f760) Data frame received for 3
I0906 22:05:38.713722       7 log.go:172] (0xc002b84000) (3) Data frame handling
I0906 22:05:38.715511       7 log.go:172] (0xc00045f760) Data frame received for 1
I0906 22:05:38.715542       7 log.go:172] (0xc002548dc0) (1) Data frame handling
I0906 22:05:38.715557       7 log.go:172] (0xc002548dc0) (1) Data frame sent
I0906 22:05:38.715582       7 log.go:172] (0xc00045f760) (0xc002548dc0) Stream removed, broadcasting: 1
I0906 22:05:38.715656       7 log.go:172] (0xc00045f760) (0xc002548dc0) Stream removed, broadcasting: 1
I0906 22:05:38.715679       7 log.go:172] (0xc00045f760) (0xc002b84000) Stream removed, broadcasting: 3
I0906 22:05:38.716104       7 log.go:172] (0xc00045f760) (0xc002b36000) Stream removed, broadcasting: 5
Sep  6 22:05:38.716: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:05:38.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sgr55" for this suite.
Sep  6 22:06:02.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:06:02.743: INFO: namespace: e2e-tests-pod-network-test-sgr55, resource: bindings, ignored listing per whitelist
Sep  6 22:06:02.812: INFO: namespace e2e-tests-pod-network-test-sgr55 deletion completed in 24.091521806s

• [SLOW TEST:46.622 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
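
The connectivity probe in this test is simply a curl run inside host-test-container-pod. The equivalent manual check, with the namespace, pod name, and target pod IP taken from the log above (the namespace is deleted after the run, so this is only a sketch):

kubectl exec -n e2e-tests-pod-network-test-sgr55 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.181:8080/hostName"
# The netserver pod answers with its own hostname, e.g. netserver-0.
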
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:06:02.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2755246c-f08d-11ea-b72c-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2755246c-f08d-11ea-b72c-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:07:33.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qg2ck" for this suite.
Sep  6 22:07:55.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:07:55.479: INFO: namespace: e2e-tests-projected-qg2ck, resource: bindings, ignored listing per whitelist
Sep  6 22:07:55.488: INFO: namespace e2e-tests-projected-qg2ck deletion completed in 22.102206767s

• [SLOW TEST:112.676 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
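
The propagation checked here can be seen with a ConfigMap mounted through a projected volume: patch the ConfigMap and the file in the running pod changes without a restart. A sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-upd-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-configmap
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
# The kubelet re-syncs the projected volume; after a short delay the file shows the new value.
kubectl exec projected-configmap-upd-demo -- cat /etc/projected-configmap-volume/data-1
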
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:07:55.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pss64/configmap-test-6a798a70-f08d-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 22:07:55.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008" in namespace "e2e-tests-configmap-pss64" to be "success or failure"
Sep  6 22:07:55.646: INFO: Pod "pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 48.383417ms
Sep  6 22:07:57.670: INFO: Pod "pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07199578s
Sep  6 22:07:59.674: INFO: Pod "pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076348796s
STEP: Saw pod success
Sep  6 22:07:59.674: INFO: Pod "pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:07:59.677: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008 container env-test: 
STEP: delete the pod
Sep  6 22:07:59.702: INFO: Waiting for pod pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008 to disappear
Sep  6 22:07:59.706: INFO: Pod pod-configmaps-6a7c9feb-f08d-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:07:59.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pss64" for this suite.
Sep  6 22:08:05.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:08:05.725: INFO: namespace: e2e-tests-configmap-pss64, resource: bindings, ignored listing per whitelist
Sep  6 22:08:05.789: INFO: namespace e2e-tests-configmap-pss64 deletion completed in 6.079924509s

• [SLOW TEST:10.301 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
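
Consuming a ConfigMap through the environment works like the Secret case above, using configMapKeyRef. A sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA=$CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
kubectl logs configmap-env-demo
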
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:08:05.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-70a1f527-f08d-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume configMaps
Sep  6 22:08:05.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008" in namespace "e2e-tests-projected-7sbzk" to be "success or failure"
Sep  6 22:08:05.951: INFO: Pod "pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 38.122739ms
Sep  6 22:08:07.955: INFO: Pod "pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042264927s
Sep  6 22:08:09.959: INFO: Pod "pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046513455s
STEP: Saw pod success
Sep  6 22:08:09.960: INFO: Pod "pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:08:09.963: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  6 22:08:10.000: INFO: Waiting for pod pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008 to disappear
Sep  6 22:08:10.019: INFO: Pod pod-projected-configmaps-70a3aa55-f08d-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:08:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7sbzk" for this suite.
Sep  6 22:08:16.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:08:16.113: INFO: namespace: e2e-tests-projected-7sbzk, resource: bindings, ignored listing per whitelist
Sep  6 22:08:16.128: INFO: namespace e2e-tests-projected-7sbzk deletion completed in 6.105739257s

• [SLOW TEST:10.339 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
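
The "with mappings" variant adds an items list so a ConfigMap key is exposed under a chosen path inside the projected volume. A sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-1
EOF
kubectl logs projected-configmap-map-demo
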
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:08:16.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Sep  6 22:08:16.332: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  6 22:08:16.352: INFO: Waiting for terminating namespaces to be deleted...
Sep  6 22:08:16.355: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Sep  6 22:08:16.359: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 22:08:16.359: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  6 22:08:16.359: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 22:08:16.359: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  6 22:08:16.359: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Sep  6 22:08:16.363: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 22:08:16.363: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  6 22:08:16.363: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 22:08:16.363: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16325132d1a7caaa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:08:17.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jpqd4" for this suite.
Sep  6 22:08:23.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:08:23.511: INFO: namespace: e2e-tests-sched-pred-jpqd4, resource: bindings, ignored listing per whitelist
Sep  6 22:08:23.527: INFO: namespace e2e-tests-sched-pred-jpqd4 deletion completed in 6.142250031s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.399 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
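
The FailedScheduling event above comes from a pod whose nodeSelector matches no node label. A minimal reproduction (the label key/value and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label-that-no-node-has: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# The pod stays Pending; describe shows the FailedScheduling event
# ("0/N nodes are available: N node(s) didn't match node selector").
kubectl describe pod restricted-pod
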
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:08:23.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lxrfs
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Sep  6 22:08:23.672: INFO: Found 0 stateful pods, waiting for 3
Sep  6 22:08:33.688: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:08:33.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:08:33.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Sep  6 22:08:43.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:08:43.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:08:43.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep  6 22:08:43.705: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep  6 22:08:53.789: INFO: Updating stateful set ss2
Sep  6 22:08:53.797: INFO: Waiting for Pod e2e-tests-statefulset-lxrfs/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Sep  6 22:09:04.348: INFO: Found 2 stateful pods, waiting for 3
Sep  6 22:09:14.353: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:09:14.353: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  6 22:09:14.353: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep  6 22:09:14.377: INFO: Updating stateful set ss2
Sep  6 22:09:14.402: INFO: Waiting for Pod e2e-tests-statefulset-lxrfs/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep  6 22:09:24.428: INFO: Updating stateful set ss2
Sep  6 22:09:24.451: INFO: Waiting for StatefulSet e2e-tests-statefulset-lxrfs/ss2 to complete update
Sep  6 22:09:24.451: INFO: Waiting for Pod e2e-tests-statefulset-lxrfs/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Sep  6 22:09:34.460: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lxrfs
Sep  6 22:09:34.462: INFO: Scaling statefulset ss2 to 0
Sep  6 22:09:54.481: INFO: Waiting for statefulset status.replicas updated to 0
Sep  6 22:09:54.483: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:09:54.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lxrfs" for this suite.
Sep  6 22:10:00.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:10:00.606: INFO: namespace: e2e-tests-statefulset-lxrfs, resource: bindings, ignored listing per whitelist
Sep  6 22:10:00.622: INFO: namespace e2e-tests-statefulset-lxrfs deletion completed in 6.120664349s

• [SLOW TEST:97.094 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
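
The canary and phased steps above hinge on the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the new revision. A sketch, assuming the StatefulSet is named ss2 with three replicas and a container named nginx (the container name is an assumption; the images are the ones in the log):

# Canary: with partition=2 only the highest ordinal pod (ss2-2) is updated.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl get pods -L controller-revision-hash   # ss2-2 carries the new revision, ss2-0/ss2-1 the old one
# Phased roll-out: lower the partition step by step to update the remaining ordinals.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
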
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:10:00.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Sep  6 22:10:00.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:03.265: INFO: stderr: ""
Sep  6 22:10:03.265: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 22:10:03.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:03.414: INFO: stderr: ""
Sep  6 22:10:03.414: INFO: stdout: "update-demo-nautilus-kz67b update-demo-nautilus-vnqf9 "
Sep  6 22:10:03.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:03.524: INFO: stderr: ""
Sep  6 22:10:03.524: INFO: stdout: ""
Sep  6 22:10:03.524: INFO: update-demo-nautilus-kz67b is created but not running
Sep  6 22:10:08.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:08.622: INFO: stderr: ""
Sep  6 22:10:08.622: INFO: stdout: "update-demo-nautilus-kz67b update-demo-nautilus-vnqf9 "
Sep  6 22:10:08.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:08.726: INFO: stderr: ""
Sep  6 22:10:08.726: INFO: stdout: "true"
Sep  6 22:10:08.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:08.845: INFO: stderr: ""
Sep  6 22:10:08.845: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:10:08.845: INFO: validating pod update-demo-nautilus-kz67b
Sep  6 22:10:08.850: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:10:08.850: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:10:08.850: INFO: update-demo-nautilus-kz67b is verified up and running
Sep  6 22:10:08.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vnqf9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:08.952: INFO: stderr: ""
Sep  6 22:10:08.952: INFO: stdout: "true"
Sep  6 22:10:08.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vnqf9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:09.042: INFO: stderr: ""
Sep  6 22:10:09.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:10:09.042: INFO: validating pod update-demo-nautilus-vnqf9
Sep  6 22:10:09.046: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:10:09.046: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:10:09.046: INFO: update-demo-nautilus-vnqf9 is verified up and running
STEP: scaling down the replication controller
Sep  6 22:10:09.049: INFO: scanned /root for discovery docs: 
Sep  6 22:10:09.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:10.195: INFO: stderr: ""
Sep  6 22:10:10.195: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 22:10:10.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:10.309: INFO: stderr: ""
Sep  6 22:10:10.309: INFO: stdout: "update-demo-nautilus-kz67b update-demo-nautilus-vnqf9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep  6 22:10:15.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:15.425: INFO: stderr: ""
Sep  6 22:10:15.425: INFO: stdout: "update-demo-nautilus-kz67b update-demo-nautilus-vnqf9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep  6 22:10:20.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:20.523: INFO: stderr: ""
Sep  6 22:10:20.524: INFO: stdout: "update-demo-nautilus-kz67b "
Sep  6 22:10:20.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:20.616: INFO: stderr: ""
Sep  6 22:10:20.616: INFO: stdout: "true"
Sep  6 22:10:20.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:20.700: INFO: stderr: ""
Sep  6 22:10:20.700: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:10:20.700: INFO: validating pod update-demo-nautilus-kz67b
Sep  6 22:10:20.703: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:10:20.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:10:20.703: INFO: update-demo-nautilus-kz67b is verified up and running
STEP: scaling up the replication controller
Sep  6 22:10:20.705: INFO: scanned /root for discovery docs: 
Sep  6 22:10:20.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:21.827: INFO: stderr: ""
Sep  6 22:10:21.828: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  6 22:10:21.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:21.937: INFO: stderr: ""
Sep  6 22:10:21.937: INFO: stdout: "update-demo-nautilus-blhkw update-demo-nautilus-kz67b "
Sep  6 22:10:21.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blhkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:22.036: INFO: stderr: ""
Sep  6 22:10:22.036: INFO: stdout: ""
Sep  6 22:10:22.036: INFO: update-demo-nautilus-blhkw is created but not running
Sep  6 22:10:27.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.155: INFO: stderr: ""
Sep  6 22:10:27.155: INFO: stdout: "update-demo-nautilus-blhkw update-demo-nautilus-kz67b "
Sep  6 22:10:27.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blhkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.255: INFO: stderr: ""
Sep  6 22:10:27.255: INFO: stdout: "true"
Sep  6 22:10:27.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blhkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.345: INFO: stderr: ""
Sep  6 22:10:27.345: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:10:27.345: INFO: validating pod update-demo-nautilus-blhkw
Sep  6 22:10:27.349: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:10:27.349: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:10:27.349: INFO: update-demo-nautilus-blhkw is verified up and running
Sep  6 22:10:27.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.451: INFO: stderr: ""
Sep  6 22:10:27.451: INFO: stdout: "true"
Sep  6 22:10:27.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz67b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.544: INFO: stderr: ""
Sep  6 22:10:27.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  6 22:10:27.544: INFO: validating pod update-demo-nautilus-kz67b
Sep  6 22:10:27.548: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  6 22:10:27.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  6 22:10:27.548: INFO: update-demo-nautilus-kz67b is verified up and running
STEP: using delete to clean up resources
Sep  6 22:10:27.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  6 22:10:27.674: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep  6 22:10:27.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6jc29'
Sep  6 22:10:27.772: INFO: stderr: "No resources found.\n"
Sep  6 22:10:27.772: INFO: stdout: ""
Sep  6 22:10:27.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6jc29 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  6 22:10:28.073: INFO: stderr: ""
Sep  6 22:10:28.073: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:10:28.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6jc29" for this suite.
Sep  6 22:10:34.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:10:34.212: INFO: namespace: e2e-tests-kubectl-6jc29, resource: bindings, ignored listing per whitelist
Sep  6 22:10:34.212: INFO: namespace e2e-tests-kubectl-6jc29 deletion completed in 6.134688336s

• [SLOW TEST:33.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
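For reference, a minimal kubectl sketch of the scale-and-verify flow exercised above; the namespace is the one from this run and <pod-name> is a placeholder for either replica:

# scale the replication controller to two replicas
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6jc29
# list the pods selected by the name=update-demo label
kubectl get pods -l name=update-demo --namespace=e2e-tests-kubectl-6jc29 \
  -o go-template='{{range .items}}{{.metadata.name}} {{end}}'
# confirm the update-demo container in a replica is ready
kubectl get pod <pod-name> --namespace=e2e-tests-kubectl-6jc29 \
  -o go-template='{{range .status.containerStatuses}}{{.name}}={{.ready}} {{end}}'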
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:10:34.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  6 22:10:34.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fckcw'
Sep  6 22:10:34.562: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  6 22:10:34.562: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Sep  6 22:10:36.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-fckcw'
Sep  6 22:10:36.685: INFO: stderr: ""
Sep  6 22:10:36.685: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:10:36.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fckcw" for this suite.
Sep  6 22:11:58.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:11:58.720: INFO: namespace: e2e-tests-kubectl-fckcw, resource: bindings, ignored listing per whitelist
Sep  6 22:11:58.774: INFO: namespace e2e-tests-kubectl-fckcw deletion completed in 1m22.086004034s

• [SLOW TEST:84.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
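The deprecation warning above refers to the old deployment generator; a rough equivalent of what the test ran, plus the replacements the warning points to (image taken from the log, other names illustrative):

# 1.13-era behaviour exercised by the test: kubectl run creates a Deployment
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=<namespace>
# non-deprecated alternatives suggested by the warning
kubectl run e2e-test-nginx --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine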
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:11:58.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Sep  6 22:11:58.877: INFO: Waiting up to 5m0s for pod "client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008" in namespace "e2e-tests-containers-67qwc" to be "success or failure"
Sep  6 22:11:58.880: INFO: Pod "client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832337ms
Sep  6 22:12:00.883: INFO: Pod "client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006779503s
Sep  6 22:12:02.887: INFO: Pod "client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010789888s
STEP: Saw pod success
Sep  6 22:12:02.887: INFO: Pod "client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:12:02.890: INFO: Trying to get logs from node hunter-worker pod client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 22:12:02.918: INFO: Waiting for pod client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008 to disappear
Sep  6 22:12:02.946: INFO: Pod client-containers-fb7e7e6e-f08d-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:12:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-67qwc" for this suite.
Sep  6 22:12:08.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:12:09.045: INFO: namespace: e2e-tests-containers-67qwc, resource: bindings, ignored listing per whitelist
Sep  6 22:12:09.062: INFO: namespace e2e-tests-containers-67qwc deletion completed in 6.113212308s

• [SLOW TEST:10.287 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
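A minimal manifest of the kind the override-command test creates (names here are illustrative, not taken from the log); setting command replaces the image's entrypoint:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF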
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:12:09.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-01a043c2-f08e-11ea-b72c-0242ac110008
STEP: Creating a pod to test consume secrets
Sep  6 22:12:09.181: INFO: Waiting up to 5m0s for pod "pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008" in namespace "e2e-tests-secrets-cp7sw" to be "success or failure"
Sep  6 22:12:09.185: INFO: Pod "pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494122ms
Sep  6 22:12:11.190: INFO: Pod "pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008743182s
Sep  6 22:12:13.194: INFO: Pod "pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012944834s
STEP: Saw pod success
Sep  6 22:12:13.194: INFO: Pod "pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:12:13.197: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Sep  6 22:12:13.237: INFO: Waiting for pod pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008 to disappear
Sep  6 22:12:13.271: INFO: Pod pod-secrets-01a228f9-f08e-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:12:13.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cp7sw" for this suite.
Sep  6 22:12:19.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:12:19.322: INFO: namespace: e2e-tests-secrets-cp7sw, resource: bindings, ignored listing per whitelist
Sep  6 22:12:19.349: INFO: namespace e2e-tests-secrets-cp7sw deletion completed in 6.075107764s

• [SLOW TEST:10.287 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
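A sketch of the secret-volume arrangement the defaultMode test checks, with illustrative names; defaultMode controls the permission bits of the projected files:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400    # projected files get permission bits 0400
EOF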
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:12:19.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep  6 22:12:19.452: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qqh2t,SelfLink:/api/v1/namespaces/e2e-tests-watch-qqh2t/configmaps/e2e-watch-test-watch-closed,UID:07c128d8-f08e-11ea-b060-0242ac120006,ResourceVersion:232835,Generation:0,CreationTimestamp:2020-09-06 22:12:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  6 22:12:19.452: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qqh2t,SelfLink:/api/v1/namespaces/e2e-tests-watch-qqh2t/configmaps/e2e-watch-test-watch-closed,UID:07c128d8-f08e-11ea-b060-0242ac120006,ResourceVersion:232836,Generation:0,CreationTimestamp:2020-09-06 22:12:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep  6 22:12:19.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qqh2t,SelfLink:/api/v1/namespaces/e2e-tests-watch-qqh2t/configmaps/e2e-watch-test-watch-closed,UID:07c128d8-f08e-11ea-b060-0242ac120006,ResourceVersion:232837,Generation:0,CreationTimestamp:2020-09-06 22:12:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  6 22:12:19.494: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qqh2t,SelfLink:/api/v1/namespaces/e2e-tests-watch-qqh2t/configmaps/e2e-watch-test-watch-closed,UID:07c128d8-f08e-11ea-b060-0242ac120006,ResourceVersion:232838,Generation:0,CreationTimestamp:2020-09-06 22:12:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:12:19.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-qqh2t" for this suite.
Sep  6 22:12:25.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:12:25.576: INFO: namespace: e2e-tests-watch-qqh2t, resource: bindings, ignored listing per whitelist
Sep  6 22:12:25.587: INFO: namespace e2e-tests-watch-qqh2t deletion completed in 6.08247394s

• [SLOW TEST:6.237 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
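Outside the framework, the restart-from-last-resourceVersion behaviour can be reproduced against the raw API; <namespace> and <last-seen> below are placeholders for the values observed by the first watch:

# first watch: note the resourceVersion carried by the last event received
kubectl get configmaps -l watch-this-configmap=watch-closed-and-restarted --namespace=<namespace> -w
# restarted watch: replays every change made after that version
kubectl get --raw '/api/v1/namespaces/<namespace>/configmaps?watch=true&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted&resourceVersion=<last-seen>'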
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:12:25.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Sep  6 22:12:25.697: INFO: Waiting up to 5m0s for pod "client-containers-0b79e838-f08e-11ea-b72c-0242ac110008" in namespace "e2e-tests-containers-889th" to be "success or failure"
Sep  6 22:12:25.701: INFO: Pod "client-containers-0b79e838-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547109ms
Sep  6 22:12:27.704: INFO: Pod "client-containers-0b79e838-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00690748s
Sep  6 22:12:29.769: INFO: Pod "client-containers-0b79e838-f08e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071391668s
STEP: Saw pod success
Sep  6 22:12:29.769: INFO: Pod "client-containers-0b79e838-f08e-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:12:29.772: INFO: Trying to get logs from node hunter-worker pod client-containers-0b79e838-f08e-11ea-b72c-0242ac110008 container test-container: 
STEP: delete the pod
Sep  6 22:12:29.809: INFO: Waiting for pod client-containers-0b79e838-f08e-11ea-b72c-0242ac110008 to disappear
Sep  6 22:12:29.827: INFO: Pod client-containers-0b79e838-f08e-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:12:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-889th" for this suite.
Sep  6 22:12:35.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:12:35.852: INFO: namespace: e2e-tests-containers-889th, resource: bindings, ignored listing per whitelist
Sep  6 22:12:35.925: INFO: namespace e2e-tests-containers-889th deletion completed in 6.093151532s

• [SLOW TEST:10.338 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
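In contrast to the override test earlier, this test leaves command and args empty so the image's own ENTRYPOINT/CMD run; a minimal illustration (name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine    # no command/args: image defaults apply
EOF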
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:12:35.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep  6 22:12:36.059: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232904,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  6 22:12:36.059: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232904,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep  6 22:12:46.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232924,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep  6 22:12:46.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232924,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep  6 22:12:56.077: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232944,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  6 22:12:56.077: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232944,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep  6 22:13:06.084: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232964,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  6 22:13:06.084: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-a,UID:11a9ae6e-f08e-11ea-b060-0242ac120006,ResourceVersion:232964,Generation:0,CreationTimestamp:2020-09-06 22:12:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep  6 22:13:16.091: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-b,UID:29859571-f08e-11ea-b060-0242ac120006,ResourceVersion:232984,Generation:0,CreationTimestamp:2020-09-06 22:13:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  6 22:13:16.091: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-b,UID:29859571-f08e-11ea-b060-0242ac120006,ResourceVersion:232984,Generation:0,CreationTimestamp:2020-09-06 22:13:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep  6 22:13:26.098: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-b,UID:29859571-f08e-11ea-b060-0242ac120006,ResourceVersion:233004,Generation:0,CreationTimestamp:2020-09-06 22:13:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  6 22:13:26.098: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-886hb,SelfLink:/api/v1/namespaces/e2e-tests-watch-886hb/configmaps/e2e-watch-test-configmap-b,UID:29859571-f08e-11ea-b060-0242ac120006,ResourceVersion:233004,Generation:0,CreationTimestamp:2020-09-06 22:13:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:13:36.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-886hb" for this suite.
Sep  6 22:13:42.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:13:42.134: INFO: namespace: e2e-tests-watch-886hb, resource: bindings, ignored listing per whitelist
Sep  6 22:13:42.188: INFO: namespace e2e-tests-watch-886hb deletion completed in 6.084033416s

• [SLOW TEST:66.263 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
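The label-scoped watches above can be driven by hand; each watcher only receives events for configmaps matching its selector (namespace is a placeholder, object names are from the log):

# watcher A, in one shell
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --namespace=<namespace> -w
# in another shell, produce the ADDED / MODIFIED / DELETED notifications
kubectl create configmap e2e-watch-test-configmap-a --namespace=<namespace>
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A --namespace=<namespace>
kubectl patch configmap e2e-watch-test-configmap-a --namespace=<namespace> -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a --namespace=<namespace>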
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:13:42.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Sep  6 22:13:42.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Sep  6 22:13:42.368: INFO: stderr: ""
Sep  6 22:13:42.368: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45441\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45441/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:13:42.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hlvc2" for this suite.
Sep  6 22:13:48.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:13:48.399: INFO: namespace: e2e-tests-kubectl-hlvc2, resource: bindings, ignored listing per whitelist
Sep  6 22:13:48.474: INFO: namespace e2e-tests-kubectl-hlvc2 deletion completed in 6.101833204s

• [SLOW TEST:6.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
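Stripped of the ANSI colour escapes shown in the stdout above, the check amounts to:

kubectl cluster-info         # expects a "Kubernetes master is running at https://..." line
kubectl cluster-info dump    # fuller dump, as the hint in the output suggests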
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:13:48.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Sep  6 22:13:48.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rzwfh'
Sep  6 22:13:48.862: INFO: stderr: ""
Sep  6 22:13:48.862: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Sep  6 22:13:49.867: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 22:13:49.867: INFO: Found 0 / 1
Sep  6 22:13:50.867: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 22:13:50.867: INFO: Found 0 / 1
Sep  6 22:13:51.865: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 22:13:51.865: INFO: Found 0 / 1
Sep  6 22:13:52.867: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 22:13:52.867: INFO: Found 1 / 1
Sep  6 22:13:52.867: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  6 22:13:52.871: INFO: Selector matched 1 pods for map[app:redis]
Sep  6 22:13:52.871: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Sep  6 22:13:52.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh'
Sep  6 22:13:52.983: INFO: stderr: ""
Sep  6 22:13:52.983: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Sep 22:13:51.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Sep 22:13:51.465 # Server started, Redis version 3.2.12\n1:M 06 Sep 22:13:51.465 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Sep 22:13:51.465 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Sep  6 22:13:52.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --tail=1'
Sep  6 22:13:53.082: INFO: stderr: ""
Sep  6 22:13:53.082: INFO: stdout: "1:M 06 Sep 22:13:51.465 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Sep  6 22:13:53.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --limit-bytes=1'
Sep  6 22:13:53.198: INFO: stderr: ""
Sep  6 22:13:53.198: INFO: stdout: " "
STEP: exposing timestamps
Sep  6 22:13:53.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --tail=1 --timestamps'
Sep  6 22:13:53.314: INFO: stderr: ""
Sep  6 22:13:53.314: INFO: stdout: "2020-09-06T22:13:51.465465551Z 1:M 06 Sep 22:13:51.465 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Sep  6 22:13:55.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --since=1s'
Sep  6 22:13:55.924: INFO: stderr: ""
Sep  6 22:13:55.924: INFO: stdout: ""
Sep  6 22:13:55.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --since=24h'
Sep  6 22:13:56.051: INFO: stderr: ""
Sep  6 22:13:56.051: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Sep 22:13:51.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Sep 22:13:51.465 # Server started, Redis version 3.2.12\n1:M 06 Sep 22:13:51.465 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Sep 22:13:51.465 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Sep  6 22:13:56.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rzwfh'
Sep  6 22:13:56.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  6 22:13:56.166: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Sep  6 22:13:56.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-rzwfh'
Sep  6 22:13:56.271: INFO: stderr: "No resources found.\n"
Sep  6 22:13:56.271: INFO: stdout: ""
Sep  6 22:13:56.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-rzwfh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  6 22:13:56.366: INFO: stderr: ""
Sep  6 22:13:56.366: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:13:56.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rzwfh" for this suite.
Sep  6 22:14:02.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:14:02.452: INFO: namespace: e2e-tests-kubectl-rzwfh, resource: bindings, ignored listing per whitelist
Sep  6 22:14:02.492: INFO: namespace e2e-tests-kubectl-rzwfh deletion completed in 6.122304296s

• [SLOW TEST:14.018 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
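The filtering flags exercised above work against any pod; condensed, using the pod and container names from this run (kubectl logs is the non-deprecated spelling of kubectl log):

kubectl logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh               # full log
kubectl logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --tail=1      # last line only
kubectl logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --limit-bytes=1
kubectl logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --tail=1 --timestamps
kubectl logs redis-master-jlfr5 redis-master --namespace=e2e-tests-kubectl-rzwfh --since=1s    # empty if nothing logged in the last second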
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:14:02.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep  6 22:14:02.610: INFO: Waiting up to 5m0s for pod "downward-api-453e0485-f08e-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-fvtdt" to be "success or failure"
Sep  6 22:14:02.613: INFO: Pod "downward-api-453e0485-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.856522ms
Sep  6 22:14:04.746: INFO: Pod "downward-api-453e0485-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136626282s
Sep  6 22:14:06.750: INFO: Pod "downward-api-453e0485-f08e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140657254s
STEP: Saw pod success
Sep  6 22:14:06.750: INFO: Pod "downward-api-453e0485-f08e-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:14:06.753: INFO: Trying to get logs from node hunter-worker pod downward-api-453e0485-f08e-11ea-b72c-0242ac110008 container dapi-container: 
STEP: delete the pod
Sep  6 22:14:06.925: INFO: Waiting for pod downward-api-453e0485-f08e-11ea-b72c-0242ac110008 to disappear
Sep  6 22:14:06.929: INFO: Pod downward-api-453e0485-f08e-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:14:06.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fvtdt" for this suite.
Sep  6 22:14:12.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:14:13.003: INFO: namespace: e2e-tests-downward-api-fvtdt, resource: bindings, ignored listing per whitelist
Sep  6 22:14:13.023: INFO: namespace e2e-tests-downward-api-fvtdt deletion completed in 6.089707032s

• [SLOW TEST:10.530 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
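A minimal manifest of the kind this test creates, exposing the pod UID through the downward API (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # injected by the downward API
EOF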
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:14:13.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep  6 22:14:13.147: INFO: Waiting up to 5m0s for pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008" in namespace "e2e-tests-downward-api-d6s4w" to be "success or failure"
Sep  6 22:14:13.151: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.717339ms
Sep  6 22:14:15.155: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007963773s
Sep  6 22:14:17.159: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01211859s
Sep  6 22:14:19.164: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016861897s
Sep  6 22:14:21.169: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021710199s
Sep  6 22:14:23.173: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.025555832s
Sep  6 22:14:25.291: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.144026479s
STEP: Saw pod success
Sep  6 22:14:25.291: INFO: Pod "downward-api-4b873d65-f08e-11ea-b72c-0242ac110008" satisfied condition "success or failure"
Sep  6 22:14:25.294: INFO: Trying to get logs from node hunter-worker2 pod downward-api-4b873d65-f08e-11ea-b72c-0242ac110008 container dapi-container: 
STEP: delete the pod
Sep  6 22:14:25.568: INFO: Waiting for pod downward-api-4b873d65-f08e-11ea-b72c-0242ac110008 to disappear
Sep  6 22:14:25.608: INFO: Pod downward-api-4b873d65-f08e-11ea-b72c-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:14:25.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d6s4w" for this suite.
Sep  6 22:14:31.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:14:32.024: INFO: namespace: e2e-tests-downward-api-d6s4w, resource: bindings, ignored listing per whitelist
Sep  6 22:14:32.053: INFO: namespace e2e-tests-downward-api-d6s4w deletion completed in 6.442119848s

• [SLOW TEST:19.030 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
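The second downward API test differs only in the fields projected; a corresponding manifest (illustrative names) would expose pod name, namespace and IP:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF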
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:14:32.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:15:32.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nbx6s" for this suite.
Sep  6 22:15:54.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:15:54.279: INFO: namespace: e2e-tests-container-probe-nbx6s, resource: bindings, ignored listing per whitelist
Sep  6 22:15:54.332: INFO: namespace e2e-tests-container-probe-nbx6s deletion completed in 22.115385481s

• [SLOW TEST:82.280 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
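A pod of the shape this probe test uses: the exec readiness probe always fails, so the pod keeps running but never becomes Ready and its restart count stays at 0 (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready-demo    # illustrative name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always fails, so READY stays 0/1
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never-ready-demo    # READY 0/1, RESTARTS 0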
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  6 22:15:54.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Sep  6 22:15:54.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  6 22:15:54.456: INFO: Waiting for terminating namespaces to be deleted...
Sep  6 22:15:54.458: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Sep  6 22:15:54.463: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 22:15:54.463: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  6 22:15:54.463: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 22:15:54.463: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  6 22:15:54.463: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Sep  6 22:15:54.468: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded)
Sep  6 22:15:54.468: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  6 22:15:54.468: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded)
Sep  6 22:15:54.468: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Sep  6 22:15:54.549: INFO: Pod kindnet-4qkqp requesting resource cpu=100m on Node hunter-worker
Sep  6 22:15:54.549: INFO: Pod kindnet-z7tw7 requesting resource cpu=100m on Node hunter-worker2
Sep  6 22:15:54.549: INFO: Pod kube-proxy-t9g4m requesting resource cpu=0m on Node hunter-worker
Sep  6 22:15:54.549: INFO: Pod kube-proxy-vl5mq requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87f92720-f08e-11ea-b72c-0242ac110008.1632519d83485205], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2vb6t/filler-pod-87f92720-f08e-11ea-b72c-0242ac110008 to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87f92720-f08e-11ea-b72c-0242ac110008.1632519dfeee198e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87f92720-f08e-11ea-b72c-0242ac110008.1632519e37fb96f6], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87f92720-f08e-11ea-b72c-0242ac110008.1632519e4792a273], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87fef447-f08e-11ea-b72c-0242ac110008.1632519d83eb4096], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2vb6t/filler-pod-87fef447-f08e-11ea-b72c-0242ac110008 to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87fef447-f08e-11ea-b72c-0242ac110008.1632519dc9bd36b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87fef447-f08e-11ea-b72c-0242ac110008.1632519e154098ae], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-87fef447-f08e-11ea-b72c-0242ac110008.1632519e25f26ecc], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1632519eee2784f4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  6 22:16:01.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-2vb6t" for this suite.
Sep  6 22:16:07.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  6 22:16:07.857: INFO: namespace: e2e-tests-sched-pred-2vb6t, resource: bindings, ignored listing per whitelist
Sep  6 22:16:07.906: INFO: namespace e2e-tests-sched-pred-2vb6t deletion completed in 6.088760762s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:13.574 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
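The FailedScheduling event above is the expected outcome once the filler pods hold most of each node's CPU; a hand-run equivalent of the final step (the request value is illustrative, it only has to exceed what remains free):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod    # mirrors the pod name in the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "4"    # illustrative: more CPU than any node has left
EOF
kubectl describe pod additional-pod    # Events: FailedScheduling ... Insufficient cpu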
SS
Sep  6 22:16:07.907: INFO: Running AfterSuite actions on all nodes
Sep  6 22:16:07.907: INFO: Running AfterSuite actions on node 1
Sep  6 22:16:07.907: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7375.284 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS