I0219 10:47:27.114654 8 e2e.go:224] Starting e2e run "37b67dd7-5305-11ea-a0a3-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582109246 - Will randomize all specs
Will run 201 of 2164 specs

Feb 19 10:47:27.555: INFO: >>> kubeConfig: /root/.kube/config
Feb 19 10:47:27.563: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 19 10:47:27.584: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 19 10:47:27.628: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 19 10:47:27.628: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 19 10:47:27.628: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 19 10:47:27.640: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 19 10:47:27.640: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 19 10:47:27.640: INFO: e2e test version: v1.13.12
Feb 19 10:47:27.642: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 10:47:27.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Feb 19 10:47:28.035: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 10:47:28.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-xc6j5" to be "success or failure"
Feb 19 10:47:28.060: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.152581ms
Feb 19 10:47:30.469: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41814736s
Feb 19 10:47:32.495: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444431703s
Feb 19 10:47:35.589: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.538592281s
Feb 19 10:47:37.702: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.651021405s
Feb 19 10:47:39.717: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.665818198s
STEP: Saw pod success
Feb 19 10:47:39.717: INFO: Pod "downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 10:47:39.726: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008 container client-container:
STEP: delete the pod
Feb 19 10:47:40.921: INFO: Waiting for pod downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008 to disappear
Feb 19 10:47:40.930: INFO: Pod downwardapi-volume-38c1d439-5305-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 10:47:40.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xc6j5" for this suite.
Feb 19 10:47:47.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 10:47:47.206: INFO: namespace: e2e-tests-downward-api-xc6j5, resource: bindings, ignored listing per whitelist
Feb 19 10:47:47.229: INFO: namespace e2e-tests-downward-api-xc6j5 deletion completed in 6.291330985s
• [SLOW TEST:19.587 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 10:47:47.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 19 10:48:09.797: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 19 10:48:09.839: INFO: Pod pod-with-prestop-http-hook still exists Feb 19 10:48:11.840: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 19 10:48:11.911: INFO: Pod pod-with-prestop-http-hook still exists Feb 19 10:48:13.840: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 19 10:48:14.008: INFO: Pod pod-with-prestop-http-hook still exists Feb 19 10:48:15.840: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 19 10:48:15.875: INFO: Pod pod-with-prestop-http-hook still exists Feb 19 10:48:17.840: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 19 10:48:17.871: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:48:17.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-l9lm6" for this suite. Feb 19 10:48:42.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:48:42.151: INFO: namespace: e2e-tests-container-lifecycle-hook-l9lm6, resource: bindings, ignored listing per whitelist Feb 19 10:48:42.269: INFO: namespace e2e-tests-container-lifecycle-hook-l9lm6 deletion completed in 24.288889514s • [SLOW TEST:55.040 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:48:42.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:48:52.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-f7plp" for this suite. 
Feb 19 10:49:34.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:49:34.924: INFO: namespace: e2e-tests-kubelet-test-f7plp, resource: bindings, ignored listing per whitelist Feb 19 10:49:34.973: INFO: namespace e2e-tests-kubelet-test-f7plp deletion completed in 42.308827771s • [SLOW TEST:52.704 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:49:34.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0219 10:49:47.965544 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 19 10:49:47.965: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:49:47.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wxvnb" for this suite. 
Feb 19 10:50:15.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:50:15.336: INFO: namespace: e2e-tests-gc-wxvnb, resource: bindings, ignored listing per whitelist Feb 19 10:50:15.408: INFO: namespace e2e-tests-gc-wxvnb deletion completed in 27.435107709s • [SLOW TEST:40.434 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:50:15.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008 Feb 19 10:50:15.664: INFO: Pod name my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008: Found 0 pods out of 1 Feb 19 10:50:20.691: INFO: Pod name my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008: Found 1 pods out of 1 Feb 19 10:50:20.691: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008" are running Feb 19 10:50:28.744: INFO: Pod "my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008-2gm5s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 10:50:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 10:50:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 10:50:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 10:50:15 +0000 UTC Reason: Message:}]) Feb 19 10:50:28.744: INFO: Trying to dial the pod Feb 19 10:50:33.794: INFO: Controller my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008: Got expected result from replica 1 [my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008-2gm5s]: "my-hostname-basic-9ca83f6c-5305-11ea-a0a3-0242ac110008-2gm5s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:50:33.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-mqf5c" for this 
suite. Feb 19 10:50:41.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:50:41.961: INFO: namespace: e2e-tests-replication-controller-mqf5c, resource: bindings, ignored listing per whitelist Feb 19 10:50:42.027: INFO: namespace e2e-tests-replication-controller-mqf5c deletion completed in 8.215933059s • [SLOW TEST:26.619 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:50:42.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 19 10:50:43.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:46.847: INFO: stderr: "" Feb 19 10:50:46.847: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 19 10:50:46.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:47.309: INFO: stderr: "" Feb 19 10:50:47.309: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 19 10:50:52.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:52.519: INFO: stderr: "" Feb 19 10:50:52.519: INFO: stdout: "update-demo-nautilus-cqwpw update-demo-nautilus-q5tmk " Feb 19 10:50:52.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqwpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:52.627: INFO: stderr: "" Feb 19 10:50:52.627: INFO: stdout: "" Feb 19 10:50:52.627: INFO: update-demo-nautilus-cqwpw is created but not running Feb 19 10:50:57.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:57.796: INFO: stderr: "" Feb 19 10:50:57.796: INFO: stdout: "update-demo-nautilus-cqwpw update-demo-nautilus-q5tmk " Feb 19 10:50:57.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqwpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:50:57.922: INFO: stderr: "" Feb 19 10:50:57.922: INFO: stdout: "" Feb 19 10:50:57.922: INFO: update-demo-nautilus-cqwpw is created but not running Feb 19 10:51:02.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:03.138: INFO: stderr: "" Feb 19 10:51:03.138: INFO: stdout: "update-demo-nautilus-cqwpw update-demo-nautilus-q5tmk " Feb 19 10:51:03.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqwpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:03.231: INFO: stderr: "" Feb 19 10:51:03.231: INFO: stdout: "true" Feb 19 10:51:03.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqwpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:03.347: INFO: stderr: "" Feb 19 10:51:03.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 19 10:51:03.347: INFO: validating pod update-demo-nautilus-cqwpw Feb 19 10:51:03.567: INFO: got data: { "image": "nautilus.jpg" } Feb 19 10:51:03.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 19 10:51:03.568: INFO: update-demo-nautilus-cqwpw is verified up and running Feb 19 10:51:03.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5tmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:03.755: INFO: stderr: "" Feb 19 10:51:03.755: INFO: stdout: "true" Feb 19 10:51:03.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5tmk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:03.882: INFO: stderr: "" Feb 19 10:51:03.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 19 10:51:03.883: INFO: validating pod update-demo-nautilus-q5tmk Feb 19 10:51:03.903: INFO: got data: { "image": "nautilus.jpg" } Feb 19 10:51:03.903: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 19 10:51:03.904: INFO: update-demo-nautilus-q5tmk is verified up and running STEP: rolling-update to new replication controller Feb 19 10:51:03.912: INFO: scanned /root for discovery docs: Feb 19 10:51:03.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:39.705: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 19 10:51:39.705: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 19 10:51:39.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:39.911: INFO: stderr: "" Feb 19 10:51:39.911: INFO: stdout: "update-demo-kitten-gdz78 update-demo-kitten-mx4qb update-demo-nautilus-cqwpw " STEP: Replicas for name=update-demo: expected=2 actual=3 Feb 19 10:51:44.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:45.075: INFO: stderr: "" Feb 19 10:51:45.076: INFO: stdout: "update-demo-kitten-gdz78 update-demo-kitten-mx4qb " Feb 19 10:51:45.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gdz78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:45.204: INFO: stderr: "" Feb 19 10:51:45.204: INFO: stdout: "true" Feb 19 10:51:45.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gdz78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:45.285: INFO: stderr: "" Feb 19 10:51:45.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 19 10:51:45.285: INFO: validating pod update-demo-kitten-gdz78 Feb 19 10:51:45.307: INFO: got data: { "image": "kitten.jpg" } Feb 19 10:51:45.308: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Feb 19 10:51:45.308: INFO: update-demo-kitten-gdz78 is verified up and running Feb 19 10:51:45.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mx4qb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:45.397: INFO: stderr: "" Feb 19 10:51:45.397: INFO: stdout: "true" Feb 19 10:51:45.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mx4qb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sh6dn' Feb 19 10:51:45.514: INFO: stderr: "" Feb 19 10:51:45.514: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 19 10:51:45.514: INFO: validating pod update-demo-kitten-mx4qb Feb 19 10:51:45.525: INFO: got data: { "image": "kitten.jpg" } Feb 19 10:51:45.525: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 19 10:51:45.525: INFO: update-demo-kitten-mx4qb is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:51:45.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sh6dn" for this suite. Feb 19 10:52:09.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:52:09.745: INFO: namespace: e2e-tests-kubectl-sh6dn, resource: bindings, ignored listing per whitelist Feb 19 10:52:09.751: INFO: namespace e2e-tests-kubectl-sh6dn deletion completed in 24.22105009s • [SLOW TEST:87.724 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:52:09.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-vxvq5 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-vxvq5 STEP: Deleting pre-stop pod Feb 19 10:52:35.176: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:52:35.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-vxvq5" for this suite. Feb 19 10:53:15.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:53:15.383: INFO: namespace: e2e-tests-prestop-vxvq5, resource: bindings, ignored listing per whitelist Feb 19 10:53:15.579: INFO: namespace e2e-tests-prestop-vxvq5 deletion completed in 40.341513448s • [SLOW TEST:65.827 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:53:15.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 19 10:53:26.116: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:54:08.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-qtnnw" for this suite. Feb 19 10:54:16.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:54:16.597: INFO: namespace: e2e-tests-namespaces-qtnnw, resource: bindings, ignored listing per whitelist Feb 19 10:54:16.701: INFO: namespace e2e-tests-namespaces-qtnnw deletion completed in 8.311513337s STEP: Destroying namespace "e2e-tests-nsdeletetest-jdgwr" for this suite. 
Feb 19 10:54:16.703: INFO: Namespace e2e-tests-nsdeletetest-jdgwr was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-z68dz" for this suite. Feb 19 10:54:22.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:54:22.863: INFO: namespace: e2e-tests-nsdeletetest-z68dz, resource: bindings, ignored listing per whitelist Feb 19 10:54:22.883: INFO: namespace e2e-tests-nsdeletetest-z68dz deletion completed in 6.179993539s • [SLOW TEST:67.304 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:54:22.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-302066d9-5306-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 10:54:23.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-8djll" to be "success or failure" Feb 19 10:54:23.101: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.429733ms Feb 19 10:54:25.371: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278995526s Feb 19 10:54:27.394: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302874514s Feb 19 10:54:29.485: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393823468s Feb 19 10:54:31.523: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.431417232s Feb 19 10:54:33.535: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.443834753s STEP: Saw pod success Feb 19 10:54:33.535: INFO: Pod "pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 10:54:33.540: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 19 10:54:33.601: INFO: Waiting for pod pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 10:54:33.669: INFO: Pod pod-configmaps-3023a002-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:54:33.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8djll" for this suite. Feb 19 10:54:39.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:54:39.921: INFO: namespace: e2e-tests-configmap-8djll, resource: bindings, ignored listing per whitelist Feb 19 10:54:39.942: INFO: namespace e2e-tests-configmap-8djll deletion completed in 6.259913704s • [SLOW TEST:17.058 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:54:39.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 19 10:54:50.807: INFO: Successfully updated pod "labelsupdate3a527770-5306-11ea-a0a3-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:54:53.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vtd6m" for this suite. 
Feb 19 10:55:15.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:55:15.263: INFO: namespace: e2e-tests-projected-vtd6m, resource: bindings, ignored listing per whitelist Feb 19 10:55:15.263: INFO: namespace e2e-tests-projected-vtd6m deletion completed in 22.252524348s • [SLOW TEST:35.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:55:15.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 10:55:15.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-j2wng" to be "success or failure" Feb 19 10:55:15.479: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.092687ms Feb 19 10:55:17.514: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046356002s Feb 19 10:55:19.523: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054936169s Feb 19 10:55:21.966: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498037124s Feb 19 10:55:23.999: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531528211s Feb 19 10:55:26.516: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.048051495s Feb 19 10:55:28.553: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.084966586s STEP: Saw pod success Feb 19 10:55:28.553: INFO: Pod "downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 10:55:28.569: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 10:55:30.984: INFO: Waiting for pod downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 10:55:30.991: INFO: Pod downwardapi-volume-4f5ae0ed-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:55:30.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j2wng" for this suite. Feb 19 10:55:39.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:55:39.150: INFO: namespace: e2e-tests-downward-api-j2wng, resource: bindings, ignored listing per whitelist Feb 19 10:55:39.156: INFO: namespace e2e-tests-downward-api-j2wng deletion completed in 8.155206653s • [SLOW TEST:23.892 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:55:39.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 19 10:55:39.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j6d4c' Feb 19 10:55:39.391: INFO: stderr: "" Feb 19 10:55:39.391: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 19 10:55:49.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j6d4c -o json' Feb 19 10:55:49.600: INFO: stderr: "" Feb 19 10:55:49.601: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": 
\"2020-02-19T10:55:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-j6d4c\",\n \"resourceVersion\": \"22187965\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-j6d4c/pods/e2e-test-nginx-pod\",\n \"uid\": \"5d9da9dd-5306-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-pmnp2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-pmnp2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-pmnp2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-19T10:55:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-19T10:55:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-19T10:55:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-19T10:55:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://f82b9f88e2e0500a3a09e55fa2d13d00c6f142a1612cad90f09a291c86d766fe\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-19T10:55:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-19T10:55:39Z\"\n }\n}\n" STEP: replace the image in the pod Feb 19 10:55:49.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-j6d4c' Feb 19 10:55:49.968: INFO: stderr: "" Feb 19 10:55:49.968: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 19 10:55:50.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j6d4c' Feb 19 10:55:59.522: INFO: 
stderr: "" Feb 19 10:55:59.523: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:55:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j6d4c" for this suite. Feb 19 10:56:05.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:56:05.624: INFO: namespace: e2e-tests-kubectl-j6d4c, resource: bindings, ignored listing per whitelist Feb 19 10:56:05.700: INFO: namespace e2e-tests-kubectl-j6d4c deletion completed in 6.160852361s • [SLOW TEST:26.544 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:56:05.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 19 10:56:05.912: INFO: Waiting up to 5m0s for pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-zzl4q" to be "success or failure" Feb 19 10:56:06.001: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 89.199966ms Feb 19 10:56:08.019: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107460298s Feb 19 10:56:10.049: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137409727s Feb 19 10:56:12.419: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506857653s Feb 19 10:56:14.868: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.955774311s Feb 19 10:56:16.882: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.970188698s STEP: Saw pod success Feb 19 10:56:16.882: INFO: Pod "pod-6d6b557c-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 10:56:16.886: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6d6b557c-5306-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 10:56:17.090: INFO: Waiting for pod pod-6d6b557c-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 10:56:17.151: INFO: Pod pod-6d6b557c-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:56:17.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zzl4q" for this suite. Feb 19 10:56:23.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:56:23.468: INFO: namespace: e2e-tests-emptydir-zzl4q, resource: bindings, ignored listing per whitelist Feb 19 10:56:23.485: INFO: namespace e2e-tests-emptydir-zzl4q deletion completed in 6.318465551s • [SLOW TEST:17.784 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:56:23.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 19 10:56:23.700: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix497724576/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:56:23.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hz4wm" for this suite. 
Feb 19 10:56:29.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:56:30.040: INFO: namespace: e2e-tests-kubectl-hz4wm, resource: bindings, ignored listing per whitelist Feb 19 10:56:30.077: INFO: namespace e2e-tests-kubectl-hz4wm deletion completed in 6.211029636s • [SLOW TEST:6.591 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:56:30.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0219 10:57:10.357038 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 19 10:57:10.357: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:57:10.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-n75wz" for this suite. 
Feb 19 10:57:36.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:57:36.600: INFO: namespace: e2e-tests-gc-n75wz, resource: bindings, ignored listing per whitelist Feb 19 10:57:36.811: INFO: namespace e2e-tests-gc-n75wz deletion completed in 26.440757055s • [SLOW TEST:66.734 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:57:36.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-a3c7f69b-5306-11ea-a0a3-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-a3c7f702-5306-11ea-a0a3-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a3c7f69b-5306-11ea-a0a3-0242ac110008 STEP: Updating configmap cm-test-opt-upd-a3c7f702-5306-11ea-a0a3-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-a3c7f727-5306-11ea-a0a3-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:57:57.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dn9ct" for this suite. 
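The optional-configMap behaviour checked above roughly corresponds to a pod like the following sketch (names and image are illustrative; "optional: true" is what lets the pod start even while a referenced configMap is missing and pick the data up once it exists or is updated):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-optional      # may not exist yet; the pod still starts
          optional: true
EOF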
Feb 19 10:58:21.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:58:21.996: INFO: namespace: e2e-tests-projected-dn9ct, resource: bindings, ignored listing per whitelist Feb 19 10:58:22.098: INFO: namespace e2e-tests-projected-dn9ct deletion completed in 24.325606621s • [SLOW TEST:45.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:58:22.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 10:58:32.670: INFO: Waiting up to 5m0s for pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-pods-qzb6f" to be "success or failure" Feb 19 10:58:32.908: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 238.008415ms Feb 19 10:58:34.919: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248944567s Feb 19 10:58:36.932: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261524798s Feb 19 10:58:39.454: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.784260067s Feb 19 10:58:41.473: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802514255s Feb 19 10:58:43.483: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.81263208s STEP: Saw pod success Feb 19 10:58:43.483: INFO: Pod "client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 10:58:43.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008 container env3cont: STEP: delete the pod Feb 19 10:58:44.164: INFO: Waiting for pod client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 10:58:44.399: INFO: Pod client-envvars-c4e47ffa-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:58:44.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qzb6f" for this suite. 
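The service environment variables the pods test looks for follow the <SERVICE_NAME>_SERVICE_HOST / _PORT convention and are only injected into pods created after the service exists. A hedged sketch with made-up names:
# Create a backend and a service in front of it
kubectl create deployment web --image=nginx
kubectl expose deployment web --name=fooservice --port=8765 --target-port=80
# A pod created afterwards sees FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT
kubectl run envcheck --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE_SERVICE'
kubectl logs envcheck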
Feb 19 10:59:38.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:59:38.559: INFO: namespace: e2e-tests-pods-qzb6f, resource: bindings, ignored listing per whitelist Feb 19 10:59:38.659: INFO: namespace e2e-tests-pods-qzb6f deletion completed in 54.24649406s • [SLOW TEST:76.558 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:59:38.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-ec76260c-5306-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 10:59:39.044: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-72lx8" to be "success or failure" Feb 19 10:59:39.065: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.144674ms Feb 19 10:59:41.078: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033891392s Feb 19 10:59:43.099: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054534052s Feb 19 10:59:45.112: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068450476s Feb 19 10:59:47.122: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078447165s Feb 19 10:59:49.150: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.106341085s STEP: Saw pod success Feb 19 10:59:49.150: INFO: Pod "pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 10:59:49.164: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 19 10:59:49.222: INFO: Waiting for pod pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 10:59:49.232: INFO: Pod pod-projected-secrets-ec76c7ce-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 10:59:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-72lx8" for this suite. Feb 19 10:59:55.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 10:59:55.478: INFO: namespace: e2e-tests-projected-72lx8, resource: bindings, ignored listing per whitelist Feb 19 10:59:55.585: INFO: namespace e2e-tests-projected-72lx8 deletion completed in 6.345755396s • [SLOW TEST:16.926 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 10:59:55.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 19 10:59:55.775: INFO: Waiting up to 5m0s for pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008" in namespace "e2e-tests-containers-f797j" to be "success or failure" Feb 19 10:59:55.800: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.594134ms Feb 19 10:59:58.018: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242304968s Feb 19 11:00:00.030: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254241892s Feb 19 11:00:02.073: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297858858s Feb 19 11:00:04.094: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.318851226s Feb 19 11:00:06.109: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.334165449s STEP: Saw pod success Feb 19 11:00:06.110: INFO: Pod "client-containers-f66ff309-5306-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:00:06.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f66ff309-5306-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 11:00:06.782: INFO: Waiting for pod client-containers-f66ff309-5306-11ea-a0a3-0242ac110008 to disappear Feb 19 11:00:06.799: INFO: Pod client-containers-f66ff309-5306-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:00:06.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-f797j" for this suite. Feb 19 11:00:12.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:00:12.968: INFO: namespace: e2e-tests-containers-f797j, resource: bindings, ignored listing per whitelist Feb 19 11:00:13.081: INFO: namespace e2e-tests-containers-f797j deletion completed in 6.27427828s • [SLOW TEST:17.495 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:00:13.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-blff STEP: Creating a pod to test atomic-volume-subpath Feb 19 11:00:13.414: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-blff" in namespace "e2e-tests-subpath-4gpx4" to be "success or failure" Feb 19 11:00:13.432: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 18.621504ms Feb 19 11:00:15.649: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235356477s Feb 19 11:00:17.719: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30539661s Feb 19 11:00:20.043: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.629523441s Feb 19 11:00:22.319: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904772592s Feb 19 11:00:24.330: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.916057169s Feb 19 11:00:26.342: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.927890697s Feb 19 11:00:28.419: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=true. Elapsed: 15.005384661s Feb 19 11:00:30.429: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 17.015170417s Feb 19 11:00:32.464: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 19.049856583s Feb 19 11:00:34.496: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 21.081731717s Feb 19 11:00:36.531: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 23.11694663s Feb 19 11:00:38.543: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 25.128952878s Feb 19 11:00:40.573: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 27.159435596s Feb 19 11:00:42.608: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 29.194239428s Feb 19 11:00:44.627: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 31.213623654s Feb 19 11:00:46.648: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Running", Reason="", readiness=false. Elapsed: 33.234106625s Feb 19 11:00:49.049: INFO: Pod "pod-subpath-test-downwardapi-blff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.634951422s STEP: Saw pod success Feb 19 11:00:49.049: INFO: Pod "pod-subpath-test-downwardapi-blff" satisfied condition "success or failure" Feb 19 11:00:49.063: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-blff container test-container-subpath-downwardapi-blff: STEP: delete the pod Feb 19 11:00:49.669: INFO: Waiting for pod pod-subpath-test-downwardapi-blff to disappear Feb 19 11:00:49.685: INFO: Pod pod-subpath-test-downwardapi-blff no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-blff Feb 19 11:00:49.685: INFO: Deleting pod "pod-subpath-test-downwardapi-blff" in namespace "e2e-tests-subpath-4gpx4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:00:49.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-4gpx4" for this suite. 
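The atomic-writer subpath test above mounts a single downward API item through a subPath; a minimal sketch of that shape (pod name, image and file name are illustrative):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /data/podname; sleep 3600"]
    volumeMounts:
    - name: downward
      mountPath: /data/podname
      subPath: podname           # mount just this item from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF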
Feb 19 11:00:55.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:00:55.819: INFO: namespace: e2e-tests-subpath-4gpx4, resource: bindings, ignored listing per whitelist Feb 19 11:00:55.883: INFO: namespace e2e-tests-subpath-4gpx4 deletion completed in 6.184121232s • [SLOW TEST:42.802 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:00:55.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:00:56.108: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:01:04.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xbq9g" for this suite. 
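The websocket test above talks to the pod's "exec" subresource on the API server directly; the everyday route to the same subresource is kubectl exec, which the test deliberately bypasses. Pod name and command below are illustrative:
kubectl exec websocket-demo -- cat /etc/resolv.conf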
Feb 19 11:01:46.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:01:46.559: INFO: namespace: e2e-tests-pods-xbq9g, resource: bindings, ignored listing per whitelist Feb 19 11:01:46.768: INFO: namespace e2e-tests-pods-xbq9g deletion completed in 42.320801642s • [SLOW TEST:50.884 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:01:46.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 19 11:01:47.005: INFO: Waiting up to 5m0s for pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-ndxxc" to be "success or failure" Feb 19 11:01:47.016: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.423372ms Feb 19 11:01:49.303: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297980484s Feb 19 11:01:51.328: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323290546s Feb 19 11:01:53.639: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634488172s Feb 19 11:01:55.649: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.644846018s Feb 19 11:01:58.045: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.040297088s STEP: Saw pod success Feb 19 11:01:58.045: INFO: Pod "pod-38bb4e74-5307-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:01:58.270: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-38bb4e74-5307-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 11:01:58.519: INFO: Waiting for pod pod-38bb4e74-5307-11ea-a0a3-0242ac110008 to disappear Feb 19 11:01:58.531: INFO: Pod pod-38bb4e74-5307-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:01:58.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ndxxc" for this suite. 
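A rough analogue of the (non-root,0777,tmpfs) emptyDir case above: a memory-backed emptyDir mounted into a pod that runs as a non-root user (user id, names and image are illustrative):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/cache; touch /mnt/cache/f; ls -l /mnt/cache"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs-backed
EOF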
Feb 19 11:02:04.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:02:04.677: INFO: namespace: e2e-tests-emptydir-ndxxc, resource: bindings, ignored listing per whitelist Feb 19 11:02:04.728: INFO: namespace e2e-tests-emptydir-ndxxc deletion completed in 6.184159555s • [SLOW TEST:17.959 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:02:04.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pwb7t STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 19 11:02:04.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 19 11:02:41.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-pwb7t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 19 11:02:41.090: INFO: >>> kubeConfig: /root/.kube/config I0219 11:02:41.182184 8 log.go:172] (0xc00071d8c0) (0xc0015bcfa0) Create stream I0219 11:02:41.182280 8 log.go:172] (0xc00071d8c0) (0xc0015bcfa0) Stream added, broadcasting: 1 I0219 11:02:41.189969 8 log.go:172] (0xc00071d8c0) Reply frame received for 1 I0219 11:02:41.190029 8 log.go:172] (0xc00071d8c0) (0xc0013fefa0) Create stream I0219 11:02:41.190041 8 log.go:172] (0xc00071d8c0) (0xc0013fefa0) Stream added, broadcasting: 3 I0219 11:02:41.191572 8 log.go:172] (0xc00071d8c0) Reply frame received for 3 I0219 11:02:41.191601 8 log.go:172] (0xc00071d8c0) (0xc0013ff040) Create stream I0219 11:02:41.191611 8 log.go:172] (0xc00071d8c0) (0xc0013ff040) Stream added, broadcasting: 5 I0219 11:02:41.193133 8 log.go:172] (0xc00071d8c0) Reply frame received for 5 I0219 11:02:41.396790 8 log.go:172] (0xc00071d8c0) Data frame received for 3 I0219 11:02:41.396851 8 log.go:172] (0xc0013fefa0) (3) Data frame handling I0219 11:02:41.396881 8 log.go:172] (0xc0013fefa0) (3) Data frame sent I0219 11:02:41.539151 8 log.go:172] (0xc00071d8c0) Data frame received for 1 I0219 11:02:41.539694 8 log.go:172] (0xc00071d8c0) (0xc0013fefa0) Stream removed, broadcasting: 3 I0219 11:02:41.539864 8 log.go:172] (0xc0015bcfa0) (1) Data frame handling I0219 11:02:41.539896 8 log.go:172] (0xc0015bcfa0) (1) Data frame sent I0219 11:02:41.539957 8 log.go:172] (0xc00071d8c0) (0xc0013ff040) Stream removed, 
broadcasting: 5 I0219 11:02:41.540013 8 log.go:172] (0xc00071d8c0) (0xc0015bcfa0) Stream removed, broadcasting: 1 I0219 11:02:41.540070 8 log.go:172] (0xc00071d8c0) Go away received I0219 11:02:41.540671 8 log.go:172] (0xc00071d8c0) (0xc0015bcfa0) Stream removed, broadcasting: 1 I0219 11:02:41.540693 8 log.go:172] (0xc00071d8c0) (0xc0013fefa0) Stream removed, broadcasting: 3 I0219 11:02:41.540709 8 log.go:172] (0xc00071d8c0) (0xc0013ff040) Stream removed, broadcasting: 5 Feb 19 11:02:41.540: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:02:41.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-pwb7t" for this suite. Feb 19 11:03:07.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:03:07.776: INFO: namespace: e2e-tests-pod-network-test-pwb7t, resource: bindings, ignored listing per whitelist Feb 19 11:03:07.817: INFO: namespace e2e-tests-pod-network-test-pwb7t deletion completed in 26.201302269s • [SLOW TEST:63.089 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:03:07.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:04:08.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8jnqp" for this suite. 
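The probe test above asserts that a pod whose readiness probe always fails stays unready without ever being restarted; a minimal sketch (name, image and probe command are illustrative):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      periodSeconds: 5
EOF
# READY should stay 0/1 and RESTARTS should stay 0
kubectl get pod never-ready-demo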
Feb 19 11:04:32.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:04:32.146: INFO: namespace: e2e-tests-container-probe-8jnqp, resource: bindings, ignored listing per whitelist Feb 19 11:04:32.247: INFO: namespace e2e-tests-container-probe-8jnqp deletion completed in 24.195636356s • [SLOW TEST:84.430 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:04:32.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:04:45.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-875mh" for this suite. 
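The adoption test above boils down to: create a bare pod with a 'name' label, then an RC whose selector matches it; the controller takes ownership of the existing pod instead of creating a new one. A hedged sketch (the image is an assumption; the pod name and label follow the log):
kubectl run pod-adoption --image=nginx --restart=Never --labels=name=pod-adoption
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# The pre-existing pod should now carry an ownerReference to the RC
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'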
Feb 19 11:05:09.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:05:10.040: INFO: namespace: e2e-tests-replication-controller-875mh, resource: bindings, ignored listing per whitelist Feb 19 11:05:10.278: INFO: namespace e2e-tests-replication-controller-875mh deletion completed in 24.740685278s • [SLOW TEST:38.031 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:05:10.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 19 11:05:10.565: INFO: namespace e2e-tests-kubectl-6hxqt Feb 19 11:05:10.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6hxqt' Feb 19 11:05:12.673: INFO: stderr: "" Feb 19 11:05:12.673: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 19 11:05:13.693: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:13.693: INFO: Found 0 / 1 Feb 19 11:05:15.184: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:15.184: INFO: Found 0 / 1 Feb 19 11:05:15.701: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:15.701: INFO: Found 0 / 1 Feb 19 11:05:16.684: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:16.684: INFO: Found 0 / 1 Feb 19 11:05:18.155: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:18.156: INFO: Found 0 / 1 Feb 19 11:05:18.684: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:18.684: INFO: Found 0 / 1 Feb 19 11:05:20.226: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:20.226: INFO: Found 0 / 1 Feb 19 11:05:20.686: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:20.686: INFO: Found 0 / 1 Feb 19 11:05:21.691: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:21.691: INFO: Found 0 / 1 Feb 19 11:05:22.764: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:22.764: INFO: Found 0 / 1 Feb 19 11:05:23.694: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:23.695: INFO: Found 1 / 1 Feb 19 11:05:23.695: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 19 11:05:23.701: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:05:23.701: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 19 11:05:23.701: INFO: wait on redis-master startup in e2e-tests-kubectl-6hxqt Feb 19 11:05:23.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jt7mp redis-master --namespace=e2e-tests-kubectl-6hxqt' Feb 19 11:05:23.957: INFO: stderr: "" Feb 19 11:05:23.958: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Feb 11:05:21.418 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Feb 11:05:21.418 # Server started, Redis version 3.2.12\n1:M 19 Feb 11:05:21.419 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Feb 11:05:21.419 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 19 11:05:23.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-6hxqt' Feb 19 11:05:24.213: INFO: stderr: "" Feb 19 11:05:24.213: INFO: stdout: "service/rm2 exposed\n" Feb 19 11:05:24.269: INFO: Service rm2 in namespace e2e-tests-kubectl-6hxqt found. STEP: exposing service Feb 19 11:05:26.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-6hxqt' Feb 19 11:05:26.630: INFO: stderr: "" Feb 19 11:05:26.631: INFO: stdout: "service/rm3 exposed\n" Feb 19 11:05:26.748: INFO: Service rm3 in namespace e2e-tests-kubectl-6hxqt found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:05:28.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6hxqt" for this suite. 
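Stripped of the kubeconfig and namespace flags, the expose sequence the test runs above is just:
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
# Both services should now exist and point at the redis pod's port 6379
kubectl get svc rm2 rm3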
Feb 19 11:05:54.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:05:56.053: INFO: namespace: e2e-tests-kubectl-6hxqt, resource: bindings, ignored listing per whitelist Feb 19 11:05:56.164: INFO: namespace e2e-tests-kubectl-6hxqt deletion completed in 27.394532519s • [SLOW TEST:45.885 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:05:56.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 19 11:05:56.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189304,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 19 11:05:56.352: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189305,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 19 11:05:56.352: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189306,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 19 11:06:06.497: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189319,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 19 11:06:06.498: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189320,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 19 11:06:06.498: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mltp9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mltp9/configmaps/e2e-watch-test-label-changed,UID:cd58d392-5307-11ea-a994-fa163e34d433,ResourceVersion:22189321,Generation:0,CreationTimestamp:2020-02-19 11:05:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:06:06.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-mltp9" for this suite. 
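The label-selector watch semantics verified above can also be observed by hand; the configmap name and label below follow the log, while the relabel commands are an illustrative way to make the object leave and re-enter the watch:
kubectl get configmap -l watch-this-configmap=label-changed-and-restored -w &
# The watch reports DELETED when the label stops matching and ADDED when it is
# restored; edits made while it does not match are not observed.
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=wrong-value --overwrite
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite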
Feb 19 11:06:12.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:06:12.866: INFO: namespace: e2e-tests-watch-mltp9, resource: bindings, ignored listing per whitelist Feb 19 11:06:12.881: INFO: namespace e2e-tests-watch-mltp9 deletion completed in 6.367754671s • [SLOW TEST:16.717 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:06:12.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:06:13.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-4b4xh" to be "success or failure" Feb 19 11:06:13.157: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 47.03509ms Feb 19 11:06:15.407: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297032168s Feb 19 11:06:17.420: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309999699s Feb 19 11:06:19.463: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353268148s Feb 19 11:06:21.491: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381049572s Feb 19 11:06:23.513: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40335476s Feb 19 11:06:25.521: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.41107264s STEP: Saw pod success Feb 19 11:06:25.521: INFO: Pod "downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:06:25.525: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:06:26.174: INFO: Waiting for pod downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008 to disappear Feb 19 11:06:26.506: INFO: Pod downwardapi-volume-d75512e1-5307-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:06:26.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4b4xh" for this suite. Feb 19 11:06:32.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:06:32.803: INFO: namespace: e2e-tests-projected-4b4xh, resource: bindings, ignored listing per whitelist Feb 19 11:06:32.955: INFO: namespace e2e-tests-projected-4b4xh deletion completed in 6.417679961s • [SLOW TEST:20.074 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:06:32.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:06:33.145: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 19 11:06:38.158: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 19 11:06:42.176: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 19 11:06:44.192: INFO: Creating deployment "test-rollover-deployment" Feb 19 11:06:44.214: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 19 11:06:46.232: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 19 11:06:46.252: INFO: Ensure that both replica sets have 1 created replica Feb 19 11:06:46.262: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 19 11:06:46.286: INFO: Updating deployment test-rollover-deployment Feb 19 11:06:46.286: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 19 11:06:48.578: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 19 11:06:48.594: INFO: 
Make sure deployment "test-rollover-deployment" is complete Feb 19 11:06:48.603: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:06:48.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:06:51.574: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:06:51.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:06:52.696: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:06:52.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:06:54.736: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:06:54.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:06:59.681: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:06:59.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:00.795: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:00.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:02.665: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:02.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717707222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:04.645: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:04.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:06.654: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:06.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:08.640: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:08.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:10.626: INFO: all replica sets need to contain the pod-template-hash label Feb 19 11:07:10.626: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717707204, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:07:12.667: INFO: Feb 19 11:07:12.667: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 19 11:07:12.710: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-cl6vb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cl6vb/deployments/test-rollover-deployment,UID:e9e1cb0e-5307-11ea-a994-fa163e34d433,ResourceVersion:22189499,Generation:2,CreationTimestamp:2020-02-19 11:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-19 11:06:44 +0000 UTC 2020-02-19 11:06:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-19 11:07:12 +0000 UTC 2020-02-19 11:06:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 19 11:07:12.717: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-cl6vb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cl6vb/replicasets/test-rollover-deployment-5b8479fdb6,UID:eb210b9b-5307-11ea-a994-fa163e34d433,ResourceVersion:22189489,Generation:2,CreationTimestamp:2020-02-19 11:06:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9e1cb0e-5307-11ea-a994-fa163e34d433 0xc001fbdfa7 0xc001fbdfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 19 11:07:12.717: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 19 11:07:12.717: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-cl6vb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cl6vb/replicasets/test-rollover-controller,UID:e3485321-5307-11ea-a994-fa163e34d433,ResourceVersion:22189497,Generation:2,CreationTimestamp:2020-02-19 11:06:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9e1cb0e-5307-11ea-a994-fa163e34d433 0xc001fbde17 0xc001fbde18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 19 11:07:12.717: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-cl6vb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cl6vb/replicasets/test-rollover-deployment-58494b7559,UID:e9e81eaf-5307-11ea-a994-fa163e34d433,ResourceVersion:22189450,Generation:2,CreationTimestamp:2020-02-19 11:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9e1cb0e-5307-11ea-a994-fa163e34d433 0xc001fbded7 0xc001fbded8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 19 11:07:12.723: INFO: Pod "test-rollover-deployment-5b8479fdb6-h9tfc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-h9tfc,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-cl6vb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cl6vb/pods/test-rollover-deployment-5b8479fdb6-h9tfc,UID:eb6820eb-5307-11ea-a994-fa163e34d433,ResourceVersion:22189474,Generation:0,CreationTimestamp:2020-02-19 11:06:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 eb210b9b-5307-11ea-a994-fa163e34d433 0xc0014cbee7 
0xc0014cbee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gp7dr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gp7dr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gp7dr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014cbf50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014cbf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:06:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:06:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-19 11:06:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-19 11:07:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://58fe940d2e4ca8a086a4c3ecd230686999bb9dcbf8778abe904ed10eddd9e844}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:07:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cl6vb" for this suite. 
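For context on what the rollover case above exercises: the dumped spec amounts to a single-replica Deployment with a RollingUpdate strategy of maxUnavailable=0 / maxSurge=1 and MinReadySeconds=10, so the new "5b8479fdb6" ReplicaSet has to stay ready for 10 seconds before the old ReplicaSets are scaled to zero. A minimal Go sketch of such an object, using the k8s.io/api apps/v1 types (illustrative only, not the e2e framework's actual fixture):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10, // new pods must stay Ready this long before old ReplicaSets scale down
			Selector:        &metav1.LabelSelector{MatchLabels: map[string]string{"name": "rollover-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // never drop below the desired replica count
					MaxSurge:       &maxSurge,       // allow one extra pod while rolling over
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "rollover-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(deployment, "", "  ")
	fmt.Println(string(out))
}

With maxUnavailable=0 the controller never kills the old pod before the new one has been Ready for MinReadySeconds, which is why the polled status above keeps reporting AvailableReplicas:1 while the new ReplicaSet progresses.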
Feb 19 11:07:20.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:07:20.898: INFO: namespace: e2e-tests-deployment-cl6vb, resource: bindings, ignored listing per whitelist Feb 19 11:07:20.911: INFO: namespace e2e-tests-deployment-cl6vb deletion completed in 8.182023934s • [SLOW TEST:47.956 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:07:20.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 19 11:07:29.079: INFO: 10 pods remaining Feb 19 11:07:29.079: INFO: 10 pods has nil DeletionTimestamp Feb 19 11:07:29.079: INFO: Feb 19 11:07:31.122: INFO: 0 pods remaining Feb 19 11:07:31.122: INFO: 0 pods has nil DeletionTimestamp Feb 19 11:07:31.122: INFO: STEP: Gathering metrics W0219 11:07:32.095474 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 19 11:07:32.095: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:07:32.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-85wbx" for this suite. 
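The garbage-collector case above ("keep the rc around until all its pods are deleted if the deleteOptions says so") hinges on foreground cascading deletion: with PropagationPolicy set to Foreground, the owner object is kept, with only a deletionTimestamp set, until every dependent pod is gone, which is why the log counts pods down from 10 to 0 before the rc disappears. A minimal sketch of the relevant delete options; the rc name and the client call are placeholders, since the log does not show them and client-go Delete signatures vary by release:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground deletion: the API server sets deletionTimestamp on the owner and
	// the garbage collector removes dependents first; the owner is deleted last.
	foreground := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &foreground}

	// With a clientset this would be handed to the delete call, roughly:
	//   client.CoreV1().ReplicationControllers(ns).Delete("my-rc", &opts)
	// ("my-rc" and the exact signature are hypothetical here.)

	out, _ := json.Marshal(opts)
	fmt.Println(string(out)) // prints something like {"propagationPolicy":"Foreground"}
}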
Feb 19 11:07:46.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:07:46.953: INFO: namespace: e2e-tests-gc-85wbx, resource: bindings, ignored listing per whitelist Feb 19 11:07:47.006: INFO: namespace e2e-tests-gc-85wbx deletion completed in 14.904371911s • [SLOW TEST:26.095 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:07:47.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:07:47.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 19 11:07:47.843: INFO: stderr: "" Feb 19 11:07:47.843: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 19 11:07:47.852: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:07:47.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2qc8l" for this suite. 
Feb 19 11:07:53.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:07:54.068: INFO: namespace: e2e-tests-kubectl-2qc8l, resource: bindings, ignored listing per whitelist Feb 19 11:07:54.209: INFO: namespace e2e-tests-kubectl-2qc8l deletion completed in 6.301491186s S [SKIPPING] [7.203 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:07:47.852: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:07:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:07:54.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-7drc8" to be "success or failure" Feb 19 11:07:54.604: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.053226ms Feb 19 11:07:56.641: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054766303s Feb 19 11:07:58.663: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077289605s Feb 19 11:08:00.674: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087859337s Feb 19 11:08:02.696: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.109456044s Feb 19 11:08:04.725: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.13918758s STEP: Saw pod success Feb 19 11:08:04.725: INFO: Pod "downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:08:04.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:08:04.968: INFO: Waiting for pod downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008 to disappear Feb 19 11:08:04.993: INFO: Pod downwardapi-volume-13d36ab1-5308-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:08:04.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7drc8" for this suite. Feb 19 11:08:11.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:08:11.161: INFO: namespace: e2e-tests-downward-api-7drc8, resource: bindings, ignored listing per whitelist Feb 19 11:08:11.240: INFO: namespace e2e-tests-downward-api-7drc8 deletion completed in 6.188443328s • [SLOW TEST:17.030 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:08:11.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-q74f6 Feb 19 11:08:21.478: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-q74f6 STEP: checking the pod's current state and verifying that restartCount is present Feb 19 11:08:21.527: INFO: Initial restart count of pod liveness-http is 0 Feb 19 11:08:41.704: INFO: Restart count of pod e2e-tests-container-probe-q74f6/liveness-http is now 1 (20.17660616s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:08:41.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-q74f6" for this suite. 
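The liveness-http pod above is restarted (restartCount goes from 0 to 1 about 20 seconds in) because its container starts failing the HTTP /healthz probe and the kubelet kills and recreates it. A rough Go sketch of that kind of pod spec; the image, port, and probe thresholds are illustrative, since the log does not show the exact values the test used:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15, // illustrative values
		FailureThreshold:    1,
	}
	// HTTPGet is promoted from the embedded handler struct, so this assignment
	// works across client library versions that later renamed that field.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness", // placeholder image; the log does not show which one was used
				Args:          []string{"/server"},
				LivenessProbe: probe,
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}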
Feb 19 11:08:47.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:08:48.075: INFO: namespace: e2e-tests-container-probe-q74f6, resource: bindings, ignored listing per whitelist Feb 19 11:08:48.118: INFO: namespace e2e-tests-container-probe-q74f6 deletion completed in 6.32640406s • [SLOW TEST:36.878 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:08:48.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 19 11:08:48.414: INFO: Waiting up to 5m0s for pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-q86ff" to be "success or failure" Feb 19 11:08:48.430: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.062373ms Feb 19 11:08:50.604: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189750781s Feb 19 11:08:52.618: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203417006s Feb 19 11:08:55.293: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.878572245s Feb 19 11:08:57.307: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.893300894s Feb 19 11:08:59.367: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.952675956s STEP: Saw pod success Feb 19 11:08:59.367: INFO: Pod "downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:08:59.373: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008 container dapi-container: STEP: delete the pod Feb 19 11:08:59.516: INFO: Waiting for pod downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008 to disappear Feb 19 11:08:59.655: INFO: Pod downward-api-33e7dbf6-5308-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:08:59.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q86ff" for this suite. 
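The Downward API case above injects limits.cpu and limits.memory into environment variables for a container that sets no limits of its own, so the substituted values fall back to the node's allocatable resources. A minimal sketch of that env wiring (the pod name, image, and variable names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						// No limits are set on this container, so the kubelet
						// substitutes the node's allocatable CPU here.
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}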
Feb 19 11:09:05.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:09:05.891: INFO: namespace: e2e-tests-downward-api-q86ff, resource: bindings, ignored listing per whitelist Feb 19 11:09:05.946: INFO: namespace e2e-tests-downward-api-q86ff deletion completed in 6.254513677s • [SLOW TEST:17.827 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:09:05.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 19 11:09:06.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lt76x' Feb 19 11:09:06.284: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 19 11:09:06.285: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 19 11:09:10.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lt76x' Feb 19 11:09:10.849: INFO: stderr: "" Feb 19 11:09:10.850: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:09:10.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lt76x" for this suite. 
Feb 19 11:09:35.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:09:35.406: INFO: namespace: e2e-tests-kubectl-lt76x, resource: bindings, ignored listing per whitelist Feb 19 11:09:35.443: INFO: namespace e2e-tests-kubectl-lt76x deletion completed in 24.321789572s • [SLOW TEST:29.497 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:09:35.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-5013e984-5308-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:09:35.670: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-m9grg" to be "success or failure" Feb 19 11:09:35.710: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.363355ms Feb 19 11:09:37.944: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273613579s Feb 19 11:09:39.961: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290433042s Feb 19 11:09:41.974: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304223595s Feb 19 11:09:44.356: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.685442311s Feb 19 11:09:46.690: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.019983055s STEP: Saw pod success Feb 19 11:09:46.690: INFO: Pod "pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:09:46.709: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 19 11:09:47.035: INFO: Waiting for pod pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008 to disappear Feb 19 11:09:47.048: INFO: Pod pod-projected-configmaps-5014aefb-5308-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:09:47.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m9grg" for this suite. Feb 19 11:09:55.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:09:55.373: INFO: namespace: e2e-tests-projected-m9grg, resource: bindings, ignored listing per whitelist Feb 19 11:09:55.404: INFO: namespace e2e-tests-projected-m9grg deletion completed in 8.28802723s • [SLOW TEST:19.961 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:09:55.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 19 11:10:06.765: INFO: Successfully updated pod "annotationupdate5c08253f-5308-11ea-a0a3-0242ac110008" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:10:08.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-w2nft" for this suite. 
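The "should update annotations on modification" case works by projecting metadata.annotations into a downwardAPI volume; when the pod's annotations are patched ("Successfully updated pod" above), the kubelet rewrites the projected file, which is what the test then polls for. A sketch of the volume definition involved (volume name and mount path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downwardAPI volume exposing the pod's annotations as a file; the kubelet
	// refreshes the file contents when the annotations change.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}

	mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}

	out, _ := json.MarshalIndent(map[string]interface{}{"volume": vol, "mount": mount}, "", "  ")
	fmt.Println(string(out))
}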
Feb 19 11:10:33.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:10:33.099: INFO: namespace: e2e-tests-downward-api-w2nft, resource: bindings, ignored listing per whitelist Feb 19 11:10:33.219: INFO: namespace e2e-tests-downward-api-w2nft deletion completed in 24.206994423s • [SLOW TEST:37.815 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:10:33.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-728fc002-5308-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:10:33.537: INFO: Waiting up to 5m0s for pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-qnhx9" to be "success or failure" Feb 19 11:10:33.654: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 116.867413ms Feb 19 11:10:35.992: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455214027s Feb 19 11:10:38.007: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47013016s Feb 19 11:10:40.141: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603913405s Feb 19 11:10:42.394: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.857082516s Feb 19 11:10:44.418: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.881276849s STEP: Saw pod success Feb 19 11:10:44.418: INFO: Pod "pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:10:44.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 19 11:10:44.653: INFO: Waiting for pod pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008 to disappear Feb 19 11:10:45.218: INFO: Pod pod-configmaps-72919b71-5308-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:10:45.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qnhx9" for this suite. Feb 19 11:10:51.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:10:51.529: INFO: namespace: e2e-tests-configmap-qnhx9, resource: bindings, ignored listing per whitelist Feb 19 11:10:51.586: INFO: namespace e2e-tests-configmap-qnhx9 deletion completed in 6.350939922s • [SLOW TEST:18.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:10:51.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-mj9kf STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mj9kf to expose endpoints map[] Feb 19 11:10:51.879: INFO: Get endpoints failed (39.025652ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 19 11:10:52.910: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mj9kf exposes endpoints map[] (1.069647374s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-mj9kf STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mj9kf to expose endpoints map[pod1:[100]] Feb 19 11:10:57.667: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.731821577s elapsed, will retry) Feb 19 11:11:02.783: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mj9kf exposes endpoints map[pod1:[100]] (9.847440805s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-mj9kf STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace e2e-tests-services-mj9kf to expose endpoints map[pod1:[100] pod2:[101]] Feb 19 11:11:07.187: INFO: Unexpected endpoints: found map[7e23740b-5308-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.3826894s elapsed, will retry) Feb 19 11:11:11.845: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mj9kf exposes endpoints map[pod1:[100] pod2:[101]] (9.040665136s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-mj9kf STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mj9kf to expose endpoints map[pod2:[101]] Feb 19 11:11:12.990: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mj9kf exposes endpoints map[pod2:[101]] (1.096194539s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-mj9kf STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-mj9kf to expose endpoints map[] Feb 19 11:11:14.087: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-mj9kf exposes endpoints map[] (1.084544324s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:11:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-mj9kf" for this suite. Feb 19 11:11:39.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:11:39.696: INFO: namespace: e2e-tests-services-mj9kf, resource: bindings, ignored listing per whitelist Feb 19 11:11:39.725: INFO: namespace e2e-tests-services-mj9kf deletion completed in 24.484403212s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.138 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:11:39.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-qvl8 STEP: Creating a pod to test atomic-volume-subpath Feb 19 11:11:40.033: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qvl8" in namespace "e2e-tests-subpath-rhv5h" to be "success or failure" Feb 19 11:11:40.051: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.433709ms Feb 19 11:11:42.633: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599560041s Feb 19 11:11:44.671: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637969236s Feb 19 11:11:46.689: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65519507s Feb 19 11:11:48.703: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669707149s Feb 19 11:11:50.726: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.692979219s Feb 19 11:11:52.741: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.707257516s Feb 19 11:11:54.755: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.721756664s Feb 19 11:11:56.775: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.741489624s Feb 19 11:11:58.815: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 18.782008218s Feb 19 11:12:00.835: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 20.80187492s Feb 19 11:12:02.858: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 22.824931004s Feb 19 11:12:04.882: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 24.848402854s Feb 19 11:12:06.938: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 26.904519478s Feb 19 11:12:08.963: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 28.929418353s Feb 19 11:12:11.405: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 31.371134589s Feb 19 11:12:13.434: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 33.400170269s Feb 19 11:12:15.446: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Running", Reason="", readiness=false. Elapsed: 35.412617351s Feb 19 11:12:18.104: INFO: Pod "pod-subpath-test-secret-qvl8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.070810759s STEP: Saw pod success Feb 19 11:12:18.104: INFO: Pod "pod-subpath-test-secret-qvl8" satisfied condition "success or failure" Feb 19 11:12:18.117: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-qvl8 container test-container-subpath-secret-qvl8: STEP: delete the pod Feb 19 11:12:18.669: INFO: Waiting for pod pod-subpath-test-secret-qvl8 to disappear Feb 19 11:12:18.677: INFO: Pod pod-subpath-test-secret-qvl8 no longer exists STEP: Deleting pod pod-subpath-test-secret-qvl8 Feb 19 11:12:18.677: INFO: Deleting pod "pod-subpath-test-secret-qvl8" in namespace "e2e-tests-subpath-rhv5h" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:12:18.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-rhv5h" for this suite. 
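The subpath case above mounts a secret-backed volume through a subPath, so only the referenced path within the volume, rather than the whole secret directory, appears at the mount point inside the test container. A rough sketch of the container/volume wiring (the secret name, paths, and image are placeholders, not the test's actual values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "test-container-subpath-secret",
		Image: "busybox",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "test-volume",
			MountPath: "/test-volume", // where the selected entry shows up in the container
			SubPath:   "secret-key",   // mount only this path from within the volume
		}},
	}

	volume := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
		},
	}

	out, _ := json.MarshalIndent(map[string]interface{}{"container": container, "volume": volume}, "", "  ")
	fmt.Println(string(out))
}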
Feb 19 11:12:24.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:12:24.810: INFO: namespace: e2e-tests-subpath-rhv5h, resource: bindings, ignored listing per whitelist Feb 19 11:12:24.933: INFO: namespace e2e-tests-subpath-rhv5h deletion completed in 6.244439946s • [SLOW TEST:45.208 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:12:24.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b50dbe2e-5308-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:12:25.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-z2997" to be "success or failure" Feb 19 11:12:25.147: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.58625ms Feb 19 11:12:27.168: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033853998s Feb 19 11:12:29.190: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054990424s Feb 19 11:12:31.552: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417006317s Feb 19 11:12:33.953: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.818552137s Feb 19 11:12:35.973: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.838358567s STEP: Saw pod success Feb 19 11:12:35.973: INFO: Pod "pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:12:35.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 19 11:12:36.499: INFO: Waiting for pod pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008 to disappear Feb 19 11:12:36.529: INFO: Pod pod-projected-configmaps-b50e5c77-5308-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:12:36.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z2997" for this suite. Feb 19 11:12:42.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:12:42.762: INFO: namespace: e2e-tests-projected-z2997, resource: bindings, ignored listing per whitelist Feb 19 11:12:42.942: INFO: namespace e2e-tests-projected-z2997 deletion completed in 6.403586174s • [SLOW TEST:18.006 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:12:42.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-r7b6g Feb 19 11:12:53.264: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-r7b6g STEP: checking the pod's current state and verifying that restartCount is present Feb 19 11:12:53.271: INFO: Initial restart count of pod liveness-exec is 0 Feb 19 11:13:47.786: INFO: Restart count of pod e2e-tests-container-probe-r7b6g/liveness-exec is now 1 (54.515198259s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:13:47.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-r7b6g" for this suite. 
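The probe test above creates a pod whose liveness probe runs cat /tmp/health, then removes the file so the kubelet restarts the container and the restart count climbs from 0 to 1. A hedged, self-contained sketch of the same pattern; the pod name, image, and timings are illustrative, not the test's actual manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    # create the probe file, delete it after 30s so the probe starts failing
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# watch the restart count increase, as the test does
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'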
Feb 19 11:13:55.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:13:56.012: INFO: namespace: e2e-tests-container-probe-r7b6g, resource: bindings, ignored listing per whitelist Feb 19 11:13:56.099: INFO: namespace e2e-tests-container-probe-r7b6g deletion completed in 8.23450843s • [SLOW TEST:73.157 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:13:56.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 19 11:13:56.418: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 19 11:14:01.489: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:14:02.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hbv64" for this suite. 
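The ReplicationController test above relabels a pod so it no longer matches the controller's selector, at which point the RC releases it (drops its controller reference) and creates a replacement to restore the replica count. Roughly the same flow by hand; the selector and pod name are placeholders:

# assuming an RC whose selector is name=pod-release
kubectl get pods -l name=pod-release
kubectl label pod <pod-name> name=released --overwrite
# the relabeled pod keeps running but is no longer owned by the RC;
# the RC starts a new pod to get back to the desired replica count
kubectl get pods --show-labels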
Feb 19 11:14:10.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:14:11.270: INFO: namespace: e2e-tests-replication-controller-hbv64, resource: bindings, ignored listing per whitelist Feb 19 11:14:11.317: INFO: namespace e2e-tests-replication-controller-hbv64 deletion completed in 8.683350416s • [SLOW TEST:15.218 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:14:11.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:14:13.223: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 19 11:14:13.234: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6pdxz/daemonsets","resourceVersion":"22190509"},"items":null} Feb 19 11:14:13.242: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6pdxz/pods","resourceVersion":"22190509"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:14:13.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6pdxz" for this suite. 
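The DaemonSet case below it is skipped because the rollback-without-restarts check needs at least two schedulable nodes. On a multi-node cluster the same operation can be exercised directly with the rollout subcommands; the daemonset, container, and image names here are placeholders:

kubectl rollout history daemonset/<name>
kubectl set image daemonset/<name> <container>=<new-image>
kubectl rollout status daemonset/<name>
kubectl rollout undo daemonset/<name>    # roll back to the previous revision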
Feb 19 11:14:19.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:14:19.362: INFO: namespace: e2e-tests-daemonsets-6pdxz, resource: bindings, ignored listing per whitelist Feb 19 11:14:19.487: INFO: namespace e2e-tests-daemonsets-6pdxz deletion completed in 6.23311932s S [SKIPPING] [8.169 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:14:13.223: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:14:19.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jr7kb [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 19 11:14:19.722: INFO: Found 0 stateful pods, waiting for 3 Feb 19 11:14:29.776: INFO: Found 2 stateful pods, waiting for 3 Feb 19 11:14:39.738: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 19 11:14:39.738: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 19 11:14:39.738: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 19 11:14:49.746: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 19 11:14:49.747: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 19 11:14:49.747: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 19 11:14:49.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jr7kb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 19 11:14:51.128: INFO: stderr: "I0219 11:14:50.036419 689 log.go:172] (0xc000714370) (0xc0007ae640) Create stream\nI0219 11:14:50.036681 689 log.go:172] (0xc000714370) (0xc0007ae640) Stream added, broadcasting: 1\nI0219 11:14:50.044605 689 log.go:172] (0xc000714370) Reply frame received for 1\nI0219 11:14:50.044629 689 log.go:172] (0xc000714370) (0xc0005badc0) Create stream\nI0219 11:14:50.044635 689 log.go:172] (0xc000714370) (0xc0005badc0) Stream added, 
broadcasting: 3\nI0219 11:14:50.046484 689 log.go:172] (0xc000714370) Reply frame received for 3\nI0219 11:14:50.046561 689 log.go:172] (0xc000714370) (0xc000704000) Create stream\nI0219 11:14:50.046573 689 log.go:172] (0xc000714370) (0xc000704000) Stream added, broadcasting: 5\nI0219 11:14:50.047806 689 log.go:172] (0xc000714370) Reply frame received for 5\nI0219 11:14:50.985600 689 log.go:172] (0xc000714370) Data frame received for 3\nI0219 11:14:50.985682 689 log.go:172] (0xc0005badc0) (3) Data frame handling\nI0219 11:14:50.985752 689 log.go:172] (0xc0005badc0) (3) Data frame sent\nI0219 11:14:51.118866 689 log.go:172] (0xc000714370) (0xc0005badc0) Stream removed, broadcasting: 3\nI0219 11:14:51.118961 689 log.go:172] (0xc000714370) Data frame received for 1\nI0219 11:14:51.118981 689 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0219 11:14:51.119005 689 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0219 11:14:51.119032 689 log.go:172] (0xc000714370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0219 11:14:51.119120 689 log.go:172] (0xc000714370) (0xc000704000) Stream removed, broadcasting: 5\nI0219 11:14:51.119218 689 log.go:172] (0xc000714370) Go away received\nI0219 11:14:51.119390 689 log.go:172] (0xc000714370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0219 11:14:51.119412 689 log.go:172] (0xc000714370) (0xc0005badc0) Stream removed, broadcasting: 3\nI0219 11:14:51.119424 689 log.go:172] (0xc000714370) (0xc000704000) Stream removed, broadcasting: 5\n" Feb 19 11:14:51.129: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 19 11:14:51.129: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 19 11:15:01.209: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 19 11:15:11.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jr7kb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 19 11:15:11.928: INFO: stderr: "I0219 11:15:11.452573 711 log.go:172] (0xc0008662c0) (0xc000740640) Create stream\nI0219 11:15:11.452786 711 log.go:172] (0xc0008662c0) (0xc000740640) Stream added, broadcasting: 1\nI0219 11:15:11.461212 711 log.go:172] (0xc0008662c0) Reply frame received for 1\nI0219 11:15:11.461298 711 log.go:172] (0xc0008662c0) (0xc0007406e0) Create stream\nI0219 11:15:11.461307 711 log.go:172] (0xc0008662c0) (0xc0007406e0) Stream added, broadcasting: 3\nI0219 11:15:11.469316 711 log.go:172] (0xc0008662c0) Reply frame received for 3\nI0219 11:15:11.469355 711 log.go:172] (0xc0008662c0) (0xc0005a2d20) Create stream\nI0219 11:15:11.469364 711 log.go:172] (0xc0008662c0) (0xc0005a2d20) Stream added, broadcasting: 5\nI0219 11:15:11.471135 711 log.go:172] (0xc0008662c0) Reply frame received for 5\nI0219 11:15:11.642080 711 log.go:172] (0xc0008662c0) Data frame received for 3\nI0219 11:15:11.642127 711 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0219 11:15:11.642161 711 log.go:172] (0xc0007406e0) (3) Data frame sent\nI0219 11:15:11.916483 711 log.go:172] (0xc0008662c0) (0xc0007406e0) Stream removed, broadcasting: 3\nI0219 11:15:11.916632 711 log.go:172] (0xc0008662c0) Data frame received for 1\nI0219 11:15:11.916651 711 log.go:172] (0xc000740640) (1) Data frame handling\nI0219 
11:15:11.916666 711 log.go:172] (0xc000740640) (1) Data frame sent\nI0219 11:15:11.916675 711 log.go:172] (0xc0008662c0) (0xc000740640) Stream removed, broadcasting: 1\nI0219 11:15:11.916691 711 log.go:172] (0xc0008662c0) (0xc0005a2d20) Stream removed, broadcasting: 5\nI0219 11:15:11.916769 711 log.go:172] (0xc0008662c0) Go away received\nI0219 11:15:11.917111 711 log.go:172] (0xc0008662c0) (0xc000740640) Stream removed, broadcasting: 1\nI0219 11:15:11.917223 711 log.go:172] (0xc0008662c0) (0xc0007406e0) Stream removed, broadcasting: 3\nI0219 11:15:11.917290 711 log.go:172] (0xc0008662c0) (0xc0005a2d20) Stream removed, broadcasting: 5\n" Feb 19 11:15:11.928: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 19 11:15:11.928: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 19 11:15:22.504: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:15:22.504: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:15:22.504: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:15:32.554: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:15:32.554: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:15:32.554: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:15:42.841: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:15:42.841: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:15:52.602: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:15:52.602: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:16:02.679: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:16:02.679: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 19 11:16:12.584: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update STEP: Rolling back to a previous revision Feb 19 11:16:22.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jr7kb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 19 11:16:23.058: INFO: stderr: "I0219 11:16:22.695328 733 log.go:172] (0xc0006840b0) (0xc0006434a0) Create stream\nI0219 11:16:22.695761 733 log.go:172] (0xc0006840b0) (0xc0006434a0) Stream added, broadcasting: 1\nI0219 11:16:22.701902 733 log.go:172] (0xc0006840b0) Reply frame received for 1\nI0219 11:16:22.701955 733 log.go:172] (0xc0006840b0) (0xc000682000) Create stream\nI0219 11:16:22.701962 733 log.go:172] (0xc0006840b0) (0xc000682000) Stream added, broadcasting: 3\nI0219 11:16:22.703005 733 log.go:172] (0xc0006840b0) Reply frame received for 3\nI0219 11:16:22.703026 733 log.go:172] (0xc0006840b0) (0xc000574000) Create stream\nI0219 11:16:22.703038 733 log.go:172] (0xc0006840b0) (0xc000574000) Stream added, broadcasting: 5\nI0219 11:16:22.703796 733 log.go:172] (0xc0006840b0) Reply frame 
received for 5\nI0219 11:16:22.892465 733 log.go:172] (0xc0006840b0) Data frame received for 3\nI0219 11:16:22.892520 733 log.go:172] (0xc000682000) (3) Data frame handling\nI0219 11:16:22.892556 733 log.go:172] (0xc000682000) (3) Data frame sent\nI0219 11:16:23.047266 733 log.go:172] (0xc0006840b0) (0xc000682000) Stream removed, broadcasting: 3\nI0219 11:16:23.047955 733 log.go:172] (0xc0006840b0) Data frame received for 1\nI0219 11:16:23.048054 733 log.go:172] (0xc0006840b0) (0xc000574000) Stream removed, broadcasting: 5\nI0219 11:16:23.048103 733 log.go:172] (0xc0006434a0) (1) Data frame handling\nI0219 11:16:23.048123 733 log.go:172] (0xc0006434a0) (1) Data frame sent\nI0219 11:16:23.048138 733 log.go:172] (0xc0006840b0) (0xc0006434a0) Stream removed, broadcasting: 1\nI0219 11:16:23.048167 733 log.go:172] (0xc0006840b0) Go away received\nI0219 11:16:23.048574 733 log.go:172] (0xc0006840b0) (0xc0006434a0) Stream removed, broadcasting: 1\nI0219 11:16:23.048605 733 log.go:172] (0xc0006840b0) (0xc000682000) Stream removed, broadcasting: 3\nI0219 11:16:23.048617 733 log.go:172] (0xc0006840b0) (0xc000574000) Stream removed, broadcasting: 5\n" Feb 19 11:16:23.058: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 19 11:16:23.058: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 19 11:16:33.134: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 19 11:16:43.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jr7kb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 19 11:16:44.124: INFO: stderr: "I0219 11:16:43.652028 755 log.go:172] (0xc00081a160) (0xc0005ae6e0) Create stream\nI0219 11:16:43.652248 755 log.go:172] (0xc00081a160) (0xc0005ae6e0) Stream added, broadcasting: 1\nI0219 11:16:43.661983 755 log.go:172] (0xc00081a160) Reply frame received for 1\nI0219 11:16:43.662059 755 log.go:172] (0xc00081a160) (0xc00057caa0) Create stream\nI0219 11:16:43.662153 755 log.go:172] (0xc00081a160) (0xc00057caa0) Stream added, broadcasting: 3\nI0219 11:16:43.664625 755 log.go:172] (0xc00081a160) Reply frame received for 3\nI0219 11:16:43.664669 755 log.go:172] (0xc00081a160) (0xc0005ae780) Create stream\nI0219 11:16:43.664684 755 log.go:172] (0xc00081a160) (0xc0005ae780) Stream added, broadcasting: 5\nI0219 11:16:43.666503 755 log.go:172] (0xc00081a160) Reply frame received for 5\nI0219 11:16:43.937754 755 log.go:172] (0xc00081a160) Data frame received for 3\nI0219 11:16:43.937887 755 log.go:172] (0xc00057caa0) (3) Data frame handling\nI0219 11:16:43.937934 755 log.go:172] (0xc00057caa0) (3) Data frame sent\nI0219 11:16:44.114170 755 log.go:172] (0xc00081a160) (0xc00057caa0) Stream removed, broadcasting: 3\nI0219 11:16:44.114263 755 log.go:172] (0xc00081a160) Data frame received for 1\nI0219 11:16:44.114289 755 log.go:172] (0xc0005ae6e0) (1) Data frame handling\nI0219 11:16:44.114308 755 log.go:172] (0xc0005ae6e0) (1) Data frame sent\nI0219 11:16:44.114324 755 log.go:172] (0xc00081a160) (0xc0005ae6e0) Stream removed, broadcasting: 1\nI0219 11:16:44.114655 755 log.go:172] (0xc00081a160) (0xc0005ae780) Stream removed, broadcasting: 5\nI0219 11:16:44.114715 755 log.go:172] (0xc00081a160) Go away received\nI0219 11:16:44.114811 755 log.go:172] (0xc00081a160) (0xc0005ae6e0) Stream removed, broadcasting: 1\nI0219 11:16:44.114830 755 log.go:172] (0xc00081a160) 
(0xc00057caa0) Stream removed, broadcasting: 3\nI0219 11:16:44.114843 755 log.go:172] (0xc00081a160) (0xc0005ae780) Stream removed, broadcasting: 5\n" Feb 19 11:16:44.125: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 19 11:16:44.125: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 19 11:16:54.175: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:16:54.175: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:16:54.175: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:17:04.701: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:17:04.701: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:17:04.701: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:17:14.427: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:17:14.427: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:17:24.349: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update Feb 19 11:17:24.349: INFO: Waiting for Pod e2e-tests-statefulset-jr7kb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 19 11:17:34.535: INFO: Waiting for StatefulSet e2e-tests-statefulset-jr7kb/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 19 11:17:44.203: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jr7kb Feb 19 11:17:44.211: INFO: Scaling statefulset ss2 to 0 Feb 19 11:18:14.348: INFO: Waiting for statefulset status.replicas updated to 0 Feb 19 11:18:14.360: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:18:14.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jr7kb" for this suite. 
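The StatefulSet test above drives a rolling update by changing the pod template image from nginx:1.14-alpine to nginx:1.15-alpine, waits for the revision to roll out in reverse ordinal order, and then rolls it back. A hedged command-line sketch of the same flow; the namespace and container name are assumptions, not read from the test:

kubectl -n <ns> set image statefulset/ss2 <container>=docker.io/library/nginx:1.15-alpine
kubectl -n <ns> rollout status statefulset/ss2   # waits for every ordinal to reach the new revision
kubectl -n <ns> get controllerrevisions          # one revision object per template change
kubectl -n <ns> rollout undo statefulset/ss2     # return to the prior revision, as the test does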
Feb 19 11:18:22.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:18:22.742: INFO: namespace: e2e-tests-statefulset-jr7kb, resource: bindings, ignored listing per whitelist Feb 19 11:18:22.897: INFO: namespace e2e-tests-statefulset-jr7kb deletion completed in 8.332805963s • [SLOW TEST:243.410 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:18:22.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-8a6b49d8-5309-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 11:18:23.104: INFO: Waiting up to 5m0s for pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-hlx2m" to be "success or failure" Feb 19 11:18:23.118: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.777693ms Feb 19 11:18:25.247: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143408497s Feb 19 11:18:27.257: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153709558s Feb 19 11:18:29.272: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168763024s Feb 19 11:18:31.289: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185123085s Feb 19 11:18:33.305: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.201016813s STEP: Saw pod success Feb 19 11:18:33.305: INFO: Pod "pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:18:33.336: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 19 11:18:33.648: INFO: Waiting for pod pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008 to disappear Feb 19 11:18:33.660: INFO: Pod pod-secrets-8a6bed7c-5309-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:18:33.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hlx2m" for this suite. Feb 19 11:18:40.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:18:41.083: INFO: namespace: e2e-tests-secrets-hlx2m, resource: bindings, ignored listing per whitelist Feb 19 11:18:41.107: INFO: namespace e2e-tests-secrets-hlx2m deletion completed in 7.37709056s • [SLOW TEST:18.210 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:18:41.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:18:41.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-rvfjj" to be "success or failure" Feb 19 11:18:41.789: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 183.006989ms Feb 19 11:18:43.878: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272372558s Feb 19 11:18:45.895: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288993671s Feb 19 11:18:48.633: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.026650336s Feb 19 11:18:51.088: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.481544588s Feb 19 11:18:53.102: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.495613338s STEP: Saw pod success Feb 19 11:18:53.102: INFO: Pod "downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:18:53.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:18:53.270: INFO: Waiting for pod downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008 to disappear Feb 19 11:18:53.290: INFO: Pod downwardapi-volume-95740833-5309-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:18:53.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rvfjj" for this suite. Feb 19 11:18:59.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:18:59.652: INFO: namespace: e2e-tests-projected-rvfjj, resource: bindings, ignored listing per whitelist Feb 19 11:18:59.681: INFO: namespace e2e-tests-projected-rvfjj deletion completed in 6.378812321s • [SLOW TEST:18.573 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:18:59.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-a0717dd4-5309-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 11:19:00.003: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-gwvdj" to be "success or failure" Feb 19 11:19:00.030: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.342733ms Feb 19 11:19:02.058: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054904579s Feb 19 11:19:04.083: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079575038s Feb 19 11:19:06.731: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.727803789s Feb 19 11:19:08.773: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7696376s Feb 19 11:19:10.798: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.794825034s STEP: Saw pod success Feb 19 11:19:10.798: INFO: Pod "pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:19:10.810: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 19 11:19:10.900: INFO: Waiting for pod pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008 to disappear Feb 19 11:19:10.907: INFO: Pod pod-projected-secrets-a072fe82-5309-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:19:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gwvdj" for this suite. Feb 19 11:19:16.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:19:17.145: INFO: namespace: e2e-tests-projected-gwvdj, resource: bindings, ignored listing per whitelist Feb 19 11:19:17.274: INFO: namespace e2e-tests-projected-gwvdj deletion completed in 6.354880463s • [SLOW TEST:17.593 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:19:17.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 19 11:19:17.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6ltbs' Feb 19 11:19:19.333: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 19 11:19:19.333: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 19 11:19:21.410: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cgkqf] Feb 19 11:19:21.411: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cgkqf" in namespace "e2e-tests-kubectl-6ltbs" to be "running and ready" Feb 19 11:19:21.475: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Pending", Reason="", readiness=false. Elapsed: 63.938385ms Feb 19 11:19:23.506: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095749018s Feb 19 11:19:26.037: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.626434537s Feb 19 11:19:28.056: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645078167s Feb 19 11:19:30.094: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683567795s Feb 19 11:19:32.111: INFO: Pod "e2e-test-nginx-rc-cgkqf": Phase="Running", Reason="", readiness=true. Elapsed: 10.700800119s Feb 19 11:19:32.112: INFO: Pod "e2e-test-nginx-rc-cgkqf" satisfied condition "running and ready" Feb 19 11:19:32.112: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cgkqf] Feb 19 11:19:32.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6ltbs' Feb 19 11:19:32.321: INFO: stderr: "" Feb 19 11:19:32.321: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Feb 19 11:19:32.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6ltbs' Feb 19 11:19:32.441: INFO: stderr: "" Feb 19 11:19:32.441: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:19:32.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6ltbs" for this suite. 
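As the stderr above notes, the run/v1 generator used by this test is deprecated. The test's flow expressed as plain commands, with the image and RC name copied from the log; the manifest route at the end is an illustrative alternative for newer kubectl versions:

# what the test ran (deprecated generator that creates a ReplicationController):
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
# fetching logs through the controller, as the test does:
kubectl logs rc/e2e-test-nginx-rc
# on newer kubectl, create the RC from an explicit manifest instead of a generator:
kubectl create -f rc.yaml   # rc.yaml: a ReplicationController wrapping the nginx:1.14-alpine template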
Feb 19 11:19:54.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:19:54.637: INFO: namespace: e2e-tests-kubectl-6ltbs, resource: bindings, ignored listing per whitelist Feb 19 11:19:54.675: INFO: namespace e2e-tests-kubectl-6ltbs deletion completed in 22.209401411s • [SLOW TEST:37.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:19:54.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 19 11:20:05.567: INFO: Successfully updated pod "labelsupdatec12a1e3e-5309-11ea-a0a3-0242ac110008" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:20:07.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vtczg" for this suite. 
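The downward API test above projects the pod's labels into a volume file and verifies the kubelet rewrites that file after the labels are modified. A minimal hand-written equivalent; the pod name, label key, and mount path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    team: blue
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo team=red --overwrite
kubectl logs labels-demo   # the projected file shows team="red" after the kubelet resyncs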
Feb 19 11:20:31.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:20:31.932: INFO: namespace: e2e-tests-downward-api-vtczg, resource: bindings, ignored listing per whitelist Feb 19 11:20:31.970: INFO: namespace e2e-tests-downward-api-vtczg deletion completed in 24.206513848s • [SLOW TEST:37.295 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:20:31.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-5sn9d STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5sn9d to expose endpoints map[] Feb 19 11:20:32.382: INFO: Get endpoints failed (10.162736ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 19 11:20:33.401: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5sn9d exposes endpoints map[] (1.02921056s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-5sn9d STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5sn9d to expose endpoints map[pod1:[80]] Feb 19 11:20:37.925: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.47630402s elapsed, will retry) Feb 19 11:20:43.505: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5sn9d exposes endpoints map[pod1:[80]] (10.055960083s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-5sn9d STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5sn9d to expose endpoints map[pod1:[80] pod2:[80]] Feb 19 11:20:47.914: INFO: Unexpected endpoints: found map[d82522c2-5309-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.394288634s elapsed, will retry) Feb 19 11:20:53.902: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5sn9d exposes endpoints map[pod1:[80] pod2:[80]] (10.382245005s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-5sn9d STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5sn9d to expose endpoints map[pod2:[80]] Feb 19 11:20:55.120: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5sn9d exposes endpoints map[pod2:[80]] (1.182587055s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-5sn9d STEP: waiting up to 3m0s for service endpoint-test2 in 
namespace e2e-tests-services-5sn9d to expose endpoints map[] Feb 19 11:20:55.261: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5sn9d exposes endpoints map[] (125.000697ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:20:55.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-5sn9d" for this suite. Feb 19 11:21:19.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:21:19.734: INFO: namespace: e2e-tests-services-5sn9d, resource: bindings, ignored listing per whitelist Feb 19 11:21:19.765: INFO: namespace e2e-tests-services-5sn9d deletion completed in 24.327919242s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.795 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:21:19.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f3e6c14e-5309-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:21:20.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-v59vz" to be "success or failure" Feb 19 11:21:20.058: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 41.271183ms Feb 19 11:21:22.348: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331352517s Feb 19 11:21:24.363: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346330375s Feb 19 11:21:26.414: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397287458s Feb 19 11:21:28.534: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516953651s Feb 19 11:21:30.619: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.602018157s STEP: Saw pod success Feb 19 11:21:30.619: INFO: Pod "pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:21:30.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 19 11:21:30.833: INFO: Waiting for pod pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008 to disappear Feb 19 11:21:30.842: INFO: Pod pod-projected-configmaps-f3e78414-5309-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:21:30.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v59vz" for this suite. Feb 19 11:21:36.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:21:36.993: INFO: namespace: e2e-tests-projected-v59vz, resource: bindings, ignored listing per whitelist Feb 19 11:21:37.130: INFO: namespace e2e-tests-projected-v59vz deletion completed in 6.249259199s • [SLOW TEST:17.365 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:21:37.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:21:37.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-5szxg" to be "success or failure" Feb 19 11:21:37.271: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.836832ms Feb 19 11:21:39.342: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078158561s Feb 19 11:21:41.350: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085273351s Feb 19 11:21:43.883: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61914491s Feb 19 11:21:45.896: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.632095935s Feb 19 11:21:47.910: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.645282137s STEP: Saw pod success Feb 19 11:21:47.910: INFO: Pod "downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:21:47.920: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:21:48.015: INFO: Waiting for pod downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008 to disappear Feb 19 11:21:48.085: INFO: Pod downwardapi-volume-fe2ee810-5309-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:21:48.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5szxg" for this suite. Feb 19 11:21:54.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:21:54.226: INFO: namespace: e2e-tests-downward-api-5szxg, resource: bindings, ignored listing per whitelist Feb 19 11:21:54.328: INFO: namespace e2e-tests-downward-api-5szxg deletion completed in 6.222748026s • [SLOW TEST:17.197 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:21:54.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:21:54.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-kcbwn" to be "success or failure" Feb 19 11:21:54.658: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 56.242908ms Feb 19 11:21:56.682: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080685981s Feb 19 11:21:58.701: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099394412s Feb 19 11:22:01.015: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.41337095s Feb 19 11:22:03.029: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427003307s Feb 19 11:22:05.242: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.640370682s STEP: Saw pod success Feb 19 11:22:05.242: INFO: Pod "downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:22:05.280: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:22:06.304: INFO: Waiting for pod downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008 to disappear Feb 19 11:22:06.356: INFO: Pod downwardapi-volume-0884c61b-530a-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:22:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kcbwn" for this suite. Feb 19 11:22:12.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:22:12.734: INFO: namespace: e2e-tests-downward-api-kcbwn, resource: bindings, ignored listing per whitelist Feb 19 11:22:12.831: INFO: namespace e2e-tests-downward-api-kcbwn deletion completed in 6.443227027s • [SLOW TEST:18.503 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:22:12.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:22:13.029: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:22:23.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ld6tn" for this suite. 
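For reference, the Downward API memory-limit test summarized above exercises a pod whose downward API volume projects the container's limits.memory into a file that the container reads at startup. A minimal Go sketch of such a pod spec follows; the pod name, busybox image, mount path and 64Mi limit are illustrative assumptions, not the test's actual manifest.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod builds a pod whose downward API volume exposes the
// container's own memory limit as a file under /etc/podinfo.
// Names, image and the 64Mi figure are illustrative only.
func memoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				},
			}},
		},
	}
}

With the 1Mi divisor shown, the projected file for a 64Mi limit would simply contain "64".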
Feb 19 11:23:17.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:23:17.342: INFO: namespace: e2e-tests-pods-ld6tn, resource: bindings, ignored listing per whitelist Feb 19 11:23:17.451: INFO: namespace e2e-tests-pods-ld6tn deletion completed in 54.227865512s • [SLOW TEST:64.619 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:23:17.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 19 11:23:17.667: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 19 11:23:17.680: INFO: Waiting for terminating namespaces to be deleted... Feb 19 11:23:17.689: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 19 11:23:17.705: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:23:17.705: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 19 11:23:17.705: INFO: Container weave ready: true, restart count 0 Feb 19 11:23:17.705: INFO: Container weave-npc ready: true, restart count 0 Feb 19 11:23:17.705: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 19 11:23:17.705: INFO: Container coredns ready: true, restart count 0 Feb 19 11:23:17.705: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:23:17.705: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:23:17.705: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:23:17.705: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 19 11:23:17.705: INFO: Container coredns ready: true, restart count 0 Feb 19 11:23:17.705: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 19 11:23:17.705: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node 
hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 19 11:23:17.880: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3a29a152-530a-11ea-a0a3-0242ac110008.15f4c9ef1ca62a41], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-tmssz/filler-pod-3a29a152-530a-11ea-a0a3-0242ac110008 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-3a29a152-530a-11ea-a0a3-0242ac110008.15f4c9f03531ad64], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3a29a152-530a-11ea-a0a3-0242ac110008.15f4c9f0c0cc1b09], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-3a29a152-530a-11ea-a0a3-0242ac110008.15f4c9f0e958fcd6], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f4c9f1731c2719], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:23:29.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-tmssz" for this suite. 
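For reference, the scheduler-predicates test above tallies the CPU already requested by the kube-system pods it just logged, fills the node with pause pods sized to consume most of the remaining allocatable CPU, and then expects one more pod to fail scheduling with the "Insufficient cpu" event shown. A rough Go sketch of such a filler pod follows; the pod name and the request value passed in are illustrative, while the pause image is the one named in the events above.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod asks the scheduler for a fixed slice of node CPU. The test sizes
// this request so that almost no allocatable CPU is left, after which a
// further pod requesting more than the leftover is rejected by the
// scheduler with a FailedScheduling event.
func fillerPod(cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1", // image seen in the events above
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}

fillerPod("500m") would reserve half a core; the test computes the real figure from the node's allocatable CPU minus the per-pod requests listed above.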
Feb 19 11:23:37.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:23:37.397: INFO: namespace: e2e-tests-sched-pred-tmssz, resource: bindings, ignored listing per whitelist Feb 19 11:23:37.445: INFO: namespace e2e-tests-sched-pred-tmssz deletion completed in 8.305525966s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:19.993 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:23:37.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 19 11:26:43.365: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:43.398: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:45.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:45.408: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:47.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:47.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:49.399: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:49.429: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:51.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:51.418: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:53.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:53.423: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:55.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:55.409: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:57.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:57.426: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:26:59.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:26:59.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:01.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:01.418: INFO: Pod 
pod-with-poststart-exec-hook still exists Feb 19 11:27:03.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:03.412: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:05.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:05.416: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:07.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:07.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:09.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:09.427: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:11.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:11.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:13.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:13.417: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:15.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:15.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:17.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:17.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:19.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:19.410: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:21.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:21.413: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:23.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:23.409: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:25.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:25.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:27.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:27.417: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:29.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:29.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:31.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:31.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:33.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:33.448: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:35.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:35.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:37.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:37.429: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:39.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:39.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:41.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:41.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:43.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:43.412: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:45.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:45.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:47.398: INFO: Waiting 
for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:47.427: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:49.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:49.418: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:51.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:51.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:53.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:53.963: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:55.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:55.413: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:57.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:57.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:27:59.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:27:59.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:01.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:01.426: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:03.399: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:03.413: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:05.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:05.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:07.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:07.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:09.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:09.437: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:11.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:11.472: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:13.399: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:15.237: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:15.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:15.561: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:17.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:17.412: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:19.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:19.422: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:21.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:21.415: INFO: Pod pod-with-poststart-exec-hook still exists Feb 19 11:28:23.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 19 11:28:23.421: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:28:23.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qwsmj" for this suite. 
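For reference, the lifecycle-hook tests in this run create a pod whose container declares a postStart (or preStop) handler, alongside a separate pod that receives the hook's HTTP callback. A minimal Go sketch of the postStart exec variant follows; the image, sleep command and the hook-handler URL are illustrative assumptions, and corev1.Handler matches the v1.13-era client libraries referenced in this log (newer releases name the same type LifecycleHandler).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPostStartExecHook declares a postStart exec handler that runs inside
// the container immediately after it starts.
func podWithPostStartExecHook() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// hypothetical callback to the hook-handler pod
							Command: []string{"sh", "-c", "wget -q -O- http://hook-handler:8080/echo?msg=poststart"},
						},
					},
				},
			}},
		},
	}
}

After verifying the hook fired, the test deletes the pod and polls until it disappears, which is the long run of "still exists" lines above.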
Feb 19 11:28:47.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:28:47.505: INFO: namespace: e2e-tests-container-lifecycle-hook-qwsmj, resource: bindings, ignored listing per whitelist Feb 19 11:28:47.673: INFO: namespace e2e-tests-container-lifecycle-hook-qwsmj deletion completed in 24.243378743s • [SLOW TEST:310.228 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:28:47.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 19 11:29:08.045: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:08.089: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:10.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:10.114: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:12.091: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:12.138: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:14.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:14.126: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:16.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:16.098: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:18.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:18.105: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:20.090: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:20.105: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:22.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:22.104: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:24.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:24.115: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:26.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:26.890: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 
11:29:28.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:28.110: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:30.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:30.104: INFO: Pod pod-with-prestop-exec-hook still exists Feb 19 11:29:32.089: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 19 11:29:32.103: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:29:32.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tk6xp" for this suite. Feb 19 11:29:58.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:29:58.355: INFO: namespace: e2e-tests-container-lifecycle-hook-tk6xp, resource: bindings, ignored listing per whitelist Feb 19 11:29:58.430: INFO: namespace e2e-tests-container-lifecycle-hook-tk6xp deletion completed in 26.266672738s • [SLOW TEST:70.757 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:29:58.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-2914fd94-530b-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 11:29:58.736: INFO: Waiting up to 5m0s for pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-hwjmj" to be "success or failure" Feb 19 11:29:59.005: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 269.632361ms Feb 19 11:30:01.095: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359387718s Feb 19 11:30:03.115: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378927038s Feb 19 11:30:06.980: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24464278s Feb 19 11:30:09.002: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.266456343s Feb 19 11:30:11.025: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.289457743s Feb 19 11:30:13.041: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.304903742s STEP: Saw pod success Feb 19 11:30:13.041: INFO: Pod "pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:30:13.046: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 19 11:30:13.727: INFO: Waiting for pod pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:30:13.955: INFO: Pod pod-secrets-2915f35a-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:30:13.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hwjmj" for this suite. Feb 19 11:30:22.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:30:22.437: INFO: namespace: e2e-tests-secrets-hwjmj, resource: bindings, ignored listing per whitelist Feb 19 11:30:22.636: INFO: namespace e2e-tests-secrets-hwjmj deletion completed in 8.648535637s • [SLOW TEST:24.206 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:30:22.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-3779351e-530b-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 11:30:22.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-wx679" to be "success or failure" Feb 19 11:30:22.912: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.097916ms Feb 19 11:30:25.448: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558109471s Feb 19 11:30:28.039: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.149124185s Feb 19 11:30:30.067: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.177079463s Feb 19 11:30:32.424: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.534036287s Feb 19 11:30:34.445: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.555481255s Feb 19 11:30:36.468: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.577816481s STEP: Saw pod success Feb 19 11:30:36.468: INFO: Pod "pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:30:36.481: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 19 11:30:36.754: INFO: Waiting for pod pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:30:36.796: INFO: Pod pod-projected-secrets-377b46f8-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:30:36.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wx679" for this suite. Feb 19 11:30:43.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:30:43.095: INFO: namespace: e2e-tests-projected-wx679, resource: bindings, ignored listing per whitelist Feb 19 11:30:43.258: INFO: namespace e2e-tests-projected-wx679 deletion completed in 6.371303142s • [SLOW TEST:20.619 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:30:43.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:30:43.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-2xbcj" to be "success or failure" Feb 19 11:30:43.479: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.036849ms Feb 19 11:30:45.529: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06399548s Feb 19 11:30:47.542: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076635085s Feb 19 11:30:49.572: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10704445s Feb 19 11:30:51.594: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128821611s Feb 19 11:30:53.627: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161851533s Feb 19 11:30:55.640: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.175223026s STEP: Saw pod success Feb 19 11:30:55.640: INFO: Pod "downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:30:55.644: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:30:56.463: INFO: Waiting for pod downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:30:56.547: INFO: Pod downwardapi-volume-43beb7f2-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:30:56.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2xbcj" for this suite. 
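For reference, the DefaultMode test summarized above creates a downward API volume in which every projected file inherits a non-default permission mode. A minimal Go sketch follows; the pod name, busybox image, mount path and the 0400 mode are illustrative assumptions.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultModePod projects the pod name into a downward API volume and sets
// DefaultMode so every projected file is created with 0400 permissions.
func defaultModePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}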
Feb 19 11:31:02.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:31:02.762: INFO: namespace: e2e-tests-downward-api-2xbcj, resource: bindings, ignored listing per whitelist Feb 19 11:31:02.793: INFO: namespace e2e-tests-downward-api-2xbcj deletion completed in 6.225522609s • [SLOW TEST:19.535 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:31:02.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4f6cd055-530b-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume secrets Feb 19 11:31:03.151: INFO: Waiting up to 5m0s for pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-q2t4f" to be "success or failure" Feb 19 11:31:03.176: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.34046ms Feb 19 11:31:05.349: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197898186s Feb 19 11:31:07.364: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212951066s Feb 19 11:31:10.441: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.289742796s Feb 19 11:31:12.457: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.305862665s Feb 19 11:31:14.472: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 11.320746169s Feb 19 11:31:16.529: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.378108927s STEP: Saw pod success Feb 19 11:31:16.530: INFO: Pod "pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:31:16.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 19 11:31:16.797: INFO: Waiting for pod pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:31:16.816: INFO: Pod pod-secrets-4f79968e-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:31:16.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-q2t4f" for this suite. Feb 19 11:31:22.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:31:23.112: INFO: namespace: e2e-tests-secrets-q2t4f, resource: bindings, ignored listing per whitelist Feb 19 11:31:23.151: INFO: namespace e2e-tests-secrets-q2t4f deletion completed in 6.310515632s • [SLOW TEST:20.357 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:31:23.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:31:23.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-8nv9g" to be "success or failure" Feb 19 11:31:23.484: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 76.129568ms Feb 19 11:31:25.499: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090676663s Feb 19 11:31:27.513: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105234372s Feb 19 11:31:30.073: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664449181s Feb 19 11:31:32.088: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.6797579s Feb 19 11:31:34.138: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.730295011s STEP: Saw pod success Feb 19 11:31:34.139: INFO: Pod "downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:31:34.159: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:31:34.326: INFO: Waiting for pod downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:31:34.339: INFO: Pod downwardapi-volume-5b8b5b25-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:31:34.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8nv9g" for this suite. Feb 19 11:31:40.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:31:40.400: INFO: namespace: e2e-tests-downward-api-8nv9g, resource: bindings, ignored listing per whitelist Feb 19 11:31:40.715: INFO: namespace e2e-tests-downward-api-8nv9g deletion completed in 6.366167555s • [SLOW TEST:17.563 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:31:40.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:32:08.993: INFO: Container started at 2020-02-19 11:31:49 +0000 UTC, pod became ready at 2020-02-19 11:32:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:32:08.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-zbtfn" for this suite. 
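For reference, the readiness-probe test summarized above starts a container with an exec readiness probe and asserts that the pod does not report Ready before the probe's initial delay has elapsed and that the container never restarts (the log notes the container started at 11:31:49 and became ready at 11:32:07). A rough Go sketch of such a pod follows; the name, image, probed file and timings are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readinessProbePod runs a long-lived busybox container with an exec
// readiness probe. The pod stays NotReady for at least InitialDelaySeconds
// and is never restarted, which is what the test above checks.
func readinessProbePod() *corev1.Pod {
	probe := &corev1.Probe{
		InitialDelaySeconds: 30,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// Exec is a promoted field of the probe's embedded handler struct, so
	// this assignment compiles across client library versions.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-probe-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "probe-test",
				Image:          "busybox",
				Command:        []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				ReadinessProbe: probe,
			}},
		},
	}
}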
Feb 19 11:32:31.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:32:31.160: INFO: namespace: e2e-tests-container-probe-zbtfn, resource: bindings, ignored listing per whitelist Feb 19 11:32:31.172: INFO: namespace e2e-tests-container-probe-zbtfn deletion completed in 22.163462279s • [SLOW TEST:50.456 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:32:31.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:32:43.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-mqmwv" for this suite. 
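For reference, the kubelet test summarized above runs a busybox command that always fails and then checks that the container status carries a terminated state with a reason and exit code. A minimal Go sketch of such a pod follows; the pod and container names are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// alwaysFailingPod runs a busybox command that exits non-zero immediately.
// With RestartPolicy=Never the kubelet leaves the container terminated, and
// the reason and exit code can then be read from
// pod.Status.ContainerStatuses[0].State.Terminated.
func alwaysFailingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
}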
Feb 19 11:32:49.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:32:50.125: INFO: namespace: e2e-tests-kubelet-test-mqmwv, resource: bindings, ignored listing per whitelist Feb 19 11:32:50.201: INFO: namespace e2e-tests-kubelet-test-mqmwv deletion completed in 6.644304269s • [SLOW TEST:19.029 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:32:50.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-j58vn [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-j58vn STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-j58vn STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-j58vn STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-j58vn Feb 19 11:33:00.773: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-j58vn, name: ss-0, uid: 9270a383-530b-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Feb 19 11:33:02.479: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-j58vn, name: ss-0, uid: 9270a383-530b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 19 11:33:02.604: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-j58vn, name: ss-0, uid: 9270a383-530b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 19 11:33:02.631: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-j58vn STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-j58vn STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-j58vn and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 19 11:33:17.977: INFO: Deleting all statefulset in ns e2e-tests-statefulset-j58vn Feb 19 11:33:17.985: INFO: Scaling statefulset ss to 0 Feb 19 11:33:28.045: INFO: Waiting for statefulset status.replicas updated to 0 Feb 19 11:33:28.051: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:33:28.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-j58vn" for this suite. Feb 19 11:33:36.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:33:36.411: INFO: namespace: e2e-tests-statefulset-j58vn, resource: bindings, ignored listing per whitelist Feb 19 11:33:36.422: INFO: namespace e2e-tests-statefulset-j58vn deletion completed in 8.26679172s • [SLOW TEST:46.220 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:33:36.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:33:36.708: INFO: Creating deployment "test-recreate-deployment" Feb 19 11:33:36.721: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 19 11:33:36.732: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 19 11:33:38.751: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 19 11:33:38.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:33:40.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:33:43.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:33:44.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708817, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717708816, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 19 11:33:46.761: INFO: Triggering a 
new rollout for deployment "test-recreate-deployment" Feb 19 11:33:46.770: INFO: Updating deployment test-recreate-deployment Feb 19 11:33:46.770: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 19 11:33:47.383: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-spkvz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-spkvz/deployments/test-recreate-deployment,UID:ab03dfbf-530b-11ea-a994-fa163e34d433,ResourceVersion:22193010,Generation:2,CreationTimestamp:2020-02-19 11:33:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-19 11:33:47 +0000 UTC 2020-02-19 11:33:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-19 11:33:47 +0000 UTC 2020-02-19 11:33:36 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 19 11:33:47.447: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-spkvz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-spkvz/replicasets/test-recreate-deployment-589c4bfd,UID:b12f565b-530b-11ea-a994-fa163e34d433,ResourceVersion:22193008,Generation:1,CreationTimestamp:2020-02-19 11:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ab03dfbf-530b-11ea-a994-fa163e34d433 0xc001cd14bf 0xc001cd14d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 19 11:33:47.447: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 19 11:33:47.448: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-spkvz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-spkvz/replicasets/test-recreate-deployment-5bf7f65dc,UID:ab07fad3-530b-11ea-a994-fa163e34d433,ResourceVersion:22192999,Generation:2,CreationTimestamp:2020-02-19 11:33:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment 
test-recreate-deployment ab03dfbf-530b-11ea-a994-fa163e34d433 0xc001cd15b0 0xc001cd15b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 19 11:33:47.459: INFO: Pod "test-recreate-deployment-589c4bfd-q9n78" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-q9n78,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-spkvz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-spkvz/pods/test-recreate-deployment-589c4bfd-q9n78,UID:b1325257-530b-11ea-a994-fa163e34d433,ResourceVersion:22193011,Generation:0,CreationTimestamp:2020-02-19 11:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd b12f565b-530b-11ea-a994-fa163e34d433 0xc001d4889f 0xc001d488b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lf9kz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lf9kz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lf9kz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d48910} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d48930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:33:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:33:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:33:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:33:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 11:33:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:33:47.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-spkvz" for this suite. 
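For reference, the Deployment dumped above declares Strategy Type:Recreate, which is why the old ReplicaSet is scaled to zero before the new pods come up. Below is a minimal client-go sketch (not part of this run) of a comparable Deployment; the label, image, and namespace values are copied from the dump above, the error handling is illustrative, and the method signatures are the pre-1.18 ones matching the v1.13 cluster in this log (newer client-go also takes a context.Context and options arguments).

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path this e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"} // label taken from the dump above
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// Recreate: scale the old ReplicaSet to zero before creating new pods.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // image taken from the dump above
					}},
				},
			},
		},
	}

	// Namespace taken from this run; pre-1.18 Create signature.
	if _, err := cs.AppsV1().Deployments("e2e-tests-deployment-spkvz").Create(deploy); err != nil {
		panic(err)
	}
}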
Feb 19 11:33:54.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:33:54.229: INFO: namespace: e2e-tests-deployment-spkvz, resource: bindings, ignored listing per whitelist Feb 19 11:33:54.266: INFO: namespace e2e-tests-deployment-spkvz deletion completed in 6.793679322s • [SLOW TEST:17.844 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:33:54.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 19 11:33:54.361: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 19 11:33:54.368: INFO: Waiting for terminating namespaces to be deleted... Feb 19 11:33:54.372: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 19 11:33:54.386: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:33:54.386: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 19 11:33:54.386: INFO: Container coredns ready: true, restart count 0 Feb 19 11:33:54.386: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 19 11:33:54.386: INFO: Container kube-proxy ready: true, restart count 0 Feb 19 11:33:54.386: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:33:54.386: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 19 11:33:54.386: INFO: Container weave ready: true, restart count 0 Feb 19 11:33:54.386: INFO: Container weave-npc ready: true, restart count 0 Feb 19 11:33:54.386: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 19 11:33:54.386: INFO: Container coredns ready: true, restart count 0 Feb 19 11:33:54.386: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 19 11:33:54.386: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
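For reference, the step above schedules a pod whose nodeSelector matches no node label, which is what produces the FailedScheduling event recorded next. A minimal sketch of such a pod follows; the label key/value and image are made up for illustration, the namespace is the one this test run uses, and the pre-1.18 Create signature matches the v1.13 cluster in this log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this made-up label, so the pod stays Pending and the
			// scheduler emits FailedScheduling: "1 node(s) didn't match node selector."
			NodeSelector: map[string]string{"example.com/nonexistent-label": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image choice
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("e2e-tests-sched-pred-x4ckg").Create(pod); err != nil {
		panic(err)
	}
}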
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f4ca83523eabae], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:33:55.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-x4ckg" for this suite. Feb 19 11:34:02.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:34:02.238: INFO: namespace: e2e-tests-sched-pred-x4ckg, resource: bindings, ignored listing per whitelist Feb 19 11:34:02.307: INFO: namespace e2e-tests-sched-pred-x4ckg deletion completed in 6.794515791s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:8.041 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:34:02.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:34:02.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-k99kx" to be "success or failure" Feb 19 11:34:02.675: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 54.516392ms Feb 19 11:34:04.692: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071536163s Feb 19 11:34:06.719: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098255183s Feb 19 11:34:08.750: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129763354s Feb 19 11:34:10.777: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156479031s Feb 19 11:34:12.907: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.286096541s STEP: Saw pod success Feb 19 11:34:12.907: INFO: Pod "downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:34:12.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:34:13.480: INFO: Waiting for pod downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:34:13.508: INFO: Pod downwardapi-volume-ba63a4f3-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:34:13.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k99kx" for this suite. Feb 19 11:34:19.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:34:19.697: INFO: namespace: e2e-tests-projected-k99kx, resource: bindings, ignored listing per whitelist Feb 19 11:34:19.811: INFO: namespace e2e-tests-projected-k99kx deletion completed in 6.286516249s • [SLOW TEST:17.503 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:34:19.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:34:20.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-k4f6v" to be "success or failure" Feb 19 11:34:20.059: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.84236ms Feb 19 11:34:22.429: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392390394s Feb 19 11:34:24.445: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408811948s Feb 19 11:34:26.544: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508172315s Feb 19 11:34:28.579: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.542397815s Feb 19 11:34:30.609: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.572397976s STEP: Saw pod success Feb 19 11:34:30.609: INFO: Pod "downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:34:30.640: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:34:31.441: INFO: Waiting for pod downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:34:31.708: INFO: Pod downwardapi-volume-c4d32e24-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:34:31.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k4f6v" for this suite. Feb 19 11:34:37.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:34:37.937: INFO: namespace: e2e-tests-projected-k4f6v, resource: bindings, ignored listing per whitelist Feb 19 11:34:37.937: INFO: namespace e2e-tests-projected-k4f6v deletion completed in 6.217261758s • [SLOW TEST:18.125 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:34:37.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-cfae0533-530b-11ea-a0a3-0242ac110008 STEP: Creating secret with name secret-projected-all-test-volume-cfae04fd-530b-11ea-a0a3-0242ac110008 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 19 11:34:38.377: INFO: Waiting up to 5m0s for pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-47wxj" to be "success or failure" Feb 19 11:34:38.425: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 48.136494ms Feb 19 11:34:40.446: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069565571s Feb 19 11:34:42.459: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.082508317s Feb 19 11:34:44.623: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24596525s Feb 19 11:34:46.649: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.271829368s Feb 19 11:34:48.677: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300370603s STEP: Saw pod success Feb 19 11:34:48.677: INFO: Pod "projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:34:48.692: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008 container projected-all-volume-test: STEP: delete the pod Feb 19 11:34:50.109: INFO: Waiting for pod projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008 to disappear Feb 19 11:34:50.419: INFO: Pod projected-volume-cfae0477-530b-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:34:50.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-47wxj" for this suite. Feb 19 11:34:58.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:34:58.708: INFO: namespace: e2e-tests-projected-47wxj, resource: bindings, ignored listing per whitelist Feb 19 11:34:58.862: INFO: namespace e2e-tests-projected-47wxj deletion completed in 8.381474569s • [SLOW TEST:20.925 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:34:58.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 19 11:34:59.580: INFO: Waiting up to 5m0s for pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8" in namespace "e2e-tests-svcaccounts-b9kq7" to be "success or failure" Feb 19 11:34:59.613: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.660308ms Feb 19 11:35:01.634: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053862834s Feb 19 11:35:03.648: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0670583s Feb 19 11:35:05.763: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182655373s Feb 19 11:35:08.314: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73308774s Feb 19 11:35:10.337: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.756631371s Feb 19 11:35:13.010: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.429471288s Feb 19 11:35:15.060: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.479425142s Feb 19 11:35:17.073: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.492855704s STEP: Saw pod success Feb 19 11:35:17.073: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8" satisfied condition "success or failure" Feb 19 11:35:17.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8 container token-test: STEP: delete the pod Feb 19 11:35:17.371: INFO: Waiting for pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8 to disappear Feb 19 11:35:17.393: INFO: Pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-sspz8 no longer exists STEP: Creating a pod to test consume service account root CA Feb 19 11:35:17.407: INFO: Waiting up to 5m0s for pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv" in namespace "e2e-tests-svcaccounts-b9kq7" to be "success or failure" Feb 19 11:35:17.518: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 111.66169ms Feb 19 11:35:19.909: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502792103s Feb 19 11:35:21.930: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.523626377s Feb 19 11:35:24.167: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760143997s Feb 19 11:35:26.265: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.858259283s Feb 19 11:35:28.298: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.891456213s Feb 19 11:35:30.321: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.914107559s Feb 19 11:35:32.750: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.343022006s Feb 19 11:35:34.786: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.379579072s Feb 19 11:35:36.798: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.391316335s STEP: Saw pod success Feb 19 11:35:36.798: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv" satisfied condition "success or failure" Feb 19 11:35:36.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv container root-ca-test: STEP: delete the pod Feb 19 11:35:37.263: INFO: Waiting for pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv to disappear Feb 19 11:35:37.702: INFO: Pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-5qblv no longer exists STEP: Creating a pod to test consume service account namespace Feb 19 11:35:37.837: INFO: Waiting up to 5m0s for pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7" in namespace "e2e-tests-svcaccounts-b9kq7" to be "success or failure" Feb 19 11:35:37.876: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.978833ms Feb 19 11:35:39.913: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075997991s Feb 19 11:35:41.934: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097336912s Feb 19 11:35:43.958: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120718902s Feb 19 11:35:45.993: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15638593s Feb 19 11:35:48.035: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197820544s Feb 19 11:35:50.043: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206136805s Feb 19 11:35:52.058: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.22090433s Feb 19 11:35:54.079: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.241833392s Feb 19 11:35:56.089: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.252230918s STEP: Saw pod success Feb 19 11:35:56.089: INFO: Pod "pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7" satisfied condition "success or failure" Feb 19 11:35:56.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7 container namespace-test: STEP: delete the pod Feb 19 11:35:57.155: INFO: Waiting for pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7 to disappear Feb 19 11:35:57.181: INFO: Pod pod-service-account-dc635d3c-530b-11ea-a0a3-0242ac110008-qq4c7 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:35:57.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-b9kq7" for this suite. 
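For reference, the three pods above each read one of the files that Kubernetes mounts from the default service account at /var/run/secrets/kubernetes.io/serviceaccount (token, ca.crt, namespace). A minimal sketch of a pod that prints all three follows; the image and shell command are illustrative assumptions, the namespace is the one this test run uses, and the client setup mirrors the earlier sketches.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Standard mount point for the auto-created service account secret.
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox", // illustrative image and command
				Command: []string{"sh", "-c",
					"cat " + saDir + "/token " + saDir + "/ca.crt " + saDir + "/namespace"},
			}},
		},
	}

	// Pre-1.18 Create signature, matching the cluster version in this log.
	if _, err := cs.CoreV1().Pods("e2e-tests-svcaccounts-b9kq7").Create(pod); err != nil {
		panic(err)
	}
}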
Feb 19 11:36:03.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:36:03.401: INFO: namespace: e2e-tests-svcaccounts-b9kq7, resource: bindings, ignored listing per whitelist Feb 19 11:36:03.449: INFO: namespace e2e-tests-svcaccounts-b9kq7 deletion completed in 6.25770464s • [SLOW TEST:64.587 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:36:03.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 19 11:36:03.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:06.056: INFO: stderr: "" Feb 19 11:36:06.056: INFO: stdout: "pod/pause created\n" Feb 19 11:36:06.056: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 19 11:36:06.056: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-w4zfk" to be "running and ready" Feb 19 11:36:06.067: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.161487ms Feb 19 11:36:08.092: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036090885s Feb 19 11:36:10.152: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095779492s Feb 19 11:36:12.181: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12486985s Feb 19 11:36:14.285: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228787527s Feb 19 11:36:16.327: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.271265915s Feb 19 11:36:16.328: INFO: Pod "pause" satisfied condition "running and ready" Feb 19 11:36:16.328: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 19 11:36:16.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:16.533: INFO: stderr: "" Feb 19 11:36:16.533: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 19 11:36:16.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:16.666: INFO: stderr: "" Feb 19 11:36:16.666: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 19 11:36:16.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:16.787: INFO: stderr: "" Feb 19 11:36:16.788: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 19 11:36:16.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:16.968: INFO: stderr: "" Feb 19 11:36:16.968: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 19 11:36:16.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:17.167: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 19 11:36:17.167: INFO: stdout: "pod \"pause\" force deleted\n" Feb 19 11:36:17.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-w4zfk' Feb 19 11:36:17.295: INFO: stderr: "No resources found.\n" Feb 19 11:36:17.295: INFO: stdout: "" Feb 19 11:36:17.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-w4zfk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 19 11:36:17.389: INFO: stderr: "" Feb 19 11:36:17.390: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:36:17.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w4zfk" for this suite. 
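For reference, the kubectl label add/remove shown above can also be expressed as an API merge patch, where setting a label key to null deletes it. A minimal client-go sketch follows; the pod name and namespace are taken from the run above, and the pre-1.18 Patch signature matches this cluster version.

package main

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod name and namespace taken from the run above.
	ns, name := "e2e-tests-kubectl-w4zfk", "pause"

	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(name, types.MergePatchType, add); err != nil {
		panic(err)
	}

	// Equivalent of: kubectl label pods pause testing-label-
	// (a null value in a merge patch deletes the key).
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(name, types.MergePatchType, remove); err != nil {
		panic(err)
	}
}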
Feb 19 11:36:23.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:36:23.513: INFO: namespace: e2e-tests-kubectl-w4zfk, resource: bindings, ignored listing per whitelist Feb 19 11:36:23.630: INFO: namespace e2e-tests-kubectl-w4zfk deletion completed in 6.233488186s • [SLOW TEST:20.180 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:36:23.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 19 11:36:24.086: INFO: Number of nodes with available pods: 0 Feb 19 11:36:24.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:26.241: INFO: Number of nodes with available pods: 0 Feb 19 11:36:26.241: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:27.224: INFO: Number of nodes with available pods: 0 Feb 19 11:36:27.224: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:28.111: INFO: Number of nodes with available pods: 0 Feb 19 11:36:28.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:29.110: INFO: Number of nodes with available pods: 0 Feb 19 11:36:29.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:30.637: INFO: Number of nodes with available pods: 0 Feb 19 11:36:30.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:31.163: INFO: Number of nodes with available pods: 0 Feb 19 11:36:31.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:32.108: INFO: Number of nodes with available pods: 0 Feb 19 11:36:32.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:33.101: INFO: Number of nodes with available pods: 0 Feb 19 11:36:33.101: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:34.216: INFO: Number of nodes with available pods: 1 Feb 19 11:36:34.216: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Feb 19 11:36:34.332: INFO: Number of nodes with available pods: 0 Feb 19 11:36:34.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:35.357: INFO: Number of nodes with available pods: 0 Feb 19 11:36:35.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:36.366: INFO: Number of nodes with available pods: 0 Feb 19 11:36:36.366: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:37.628: INFO: Number of nodes with available pods: 0 Feb 19 11:36:37.628: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:38.420: INFO: Number of nodes with available pods: 0 Feb 19 11:36:38.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:39.595: INFO: Number of nodes with available pods: 0 Feb 19 11:36:39.596: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:40.361: INFO: Number of nodes with available pods: 0 Feb 19 11:36:40.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:41.600: INFO: Number of nodes with available pods: 0 Feb 19 11:36:41.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:42.666: INFO: Number of nodes with available pods: 0 Feb 19 11:36:42.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:43.587: INFO: Number of nodes with available pods: 0 Feb 19 11:36:43.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:44.367: INFO: Number of nodes with available pods: 0 Feb 19 11:36:44.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:45.358: INFO: Number of nodes with available pods: 0 Feb 19 11:36:45.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:46.357: INFO: Number of nodes with available pods: 0 Feb 19 11:36:46.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:47.358: INFO: Number of nodes with available pods: 0 Feb 19 11:36:47.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:48.353: INFO: Number of nodes with available pods: 0 Feb 19 11:36:48.353: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:49.365: INFO: Number of nodes with available pods: 0 Feb 19 11:36:49.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:50.371: INFO: Number of nodes with available pods: 0 Feb 19 11:36:50.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:51.358: INFO: Number of nodes with available pods: 0 Feb 19 11:36:51.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:52.390: INFO: Number of nodes with available pods: 0 Feb 19 11:36:52.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:53.352: INFO: Number of nodes with available pods: 0 Feb 19 11:36:53.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:54.392: INFO: Number of nodes with available pods: 0 Feb 19 11:36:54.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:55.364: INFO: Number of nodes with available pods: 0 Feb 19 11:36:55.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 
11:36:56.351: INFO: Number of nodes with available pods: 0 Feb 19 11:36:56.351: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:58.425: INFO: Number of nodes with available pods: 0 Feb 19 11:36:58.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:36:59.647: INFO: Number of nodes with available pods: 0 Feb 19 11:36:59.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:37:00.372: INFO: Number of nodes with available pods: 0 Feb 19 11:37:00.372: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:37:01.364: INFO: Number of nodes with available pods: 0 Feb 19 11:37:01.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:37:02.364: INFO: Number of nodes with available pods: 1 Feb 19 11:37:02.364: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sm6qt, will wait for the garbage collector to delete the pods Feb 19 11:37:02.497: INFO: Deleting DaemonSet.extensions daemon-set took: 59.906257ms Feb 19 11:37:02.698: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.807823ms Feb 19 11:37:09.727: INFO: Number of nodes with available pods: 0 Feb 19 11:37:09.727: INFO: Number of running nodes: 0, number of available pods: 0 Feb 19 11:37:09.740: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sm6qt/daemonsets","resourceVersion":"22193507"},"items":null} Feb 19 11:37:09.748: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sm6qt/pods","resourceVersion":"22193507"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:37:09.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-sm6qt" for this suite. 
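For reference, the DaemonSet created above schedules one pod per schedulable node (a single node in this cluster), and the controller recreates the pod when it is deleted, which is what the long poll loop above waits for. A minimal sketch of a comparable DaemonSet follows; the selector labels and image are illustrative assumptions, the namespace is the one this test run uses, and the pre-1.18 Create signature matches this cluster version.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
					}},
				},
			},
		},
	}

	// One pod per node; deleting a pod causes the DaemonSet controller to revive it.
	if _, err := cs.AppsV1().DaemonSets("e2e-tests-daemonsets-sm6qt").Create(ds); err != nil {
		panic(err)
	}
}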
Feb 19 11:37:17.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:37:17.925: INFO: namespace: e2e-tests-daemonsets-sm6qt, resource: bindings, ignored listing per whitelist Feb 19 11:37:17.994: INFO: namespace e2e-tests-daemonsets-sm6qt deletion completed in 8.225157378s • [SLOW TEST:54.364 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:37:17.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 19 11:37:30.194: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2ef8c324-530c-11ea-a0a3-0242ac110008,GenerateName:,Namespace:e2e-tests-events-nt9rz,SelfLink:/api/v1/namespaces/e2e-tests-events-nt9rz/pods/send-events-2ef8c324-530c-11ea-a0a3-0242ac110008,UID:2eff3eae-530c-11ea-a994-fa163e34d433,ResourceVersion:22193559,Generation:0,CreationTimestamp:2020-02-19 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 98938081,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-29qgm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-29qgm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-29qgm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001920a50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001920a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:37:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:37:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:37:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:37:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-19 11:37:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-19 11:37:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://83732e60580722ea7d4bbb33643852ece8ceac96ec2ab3af4e574d201a2519df}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 19 11:37:32.216: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 19 11:37:34.238: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:37:34.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-nt9rz" for this suite. Feb 19 11:38:14.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:38:14.682: INFO: namespace: e2e-tests-events-nt9rz, resource: bindings, ignored listing per whitelist Feb 19 11:38:15.194: INFO: namespace e2e-tests-events-nt9rz deletion completed in 40.917421577s • [SLOW TEST:57.200 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:38:15.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 19 11:38:15.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-km6mj' Feb 19 11:38:15.840: INFO: stderr: "" Feb 19 11:38:15.840: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 19 11:38:17.169: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:17.169: INFO: Found 0 / 1 Feb 19 11:38:18.285: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:18.285: INFO: Found 0 / 1 Feb 19 11:38:18.856: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:18.856: INFO: Found 0 / 1 Feb 19 11:38:19.883: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:19.883: INFO: Found 0 / 1 Feb 19 11:38:21.720: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:21.720: INFO: Found 0 / 1 Feb 19 11:38:21.989: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:21.989: INFO: Found 0 / 1 Feb 19 11:38:22.974: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:22.974: INFO: Found 0 / 1 Feb 19 11:38:23.854: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:23.854: INFO: Found 1 / 1 Feb 19 11:38:23.854: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 19 11:38:23.864: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:23.864: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 19 11:38:23.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4qtf9 --namespace=e2e-tests-kubectl-km6mj -p {"metadata":{"annotations":{"x":"y"}}}' Feb 19 11:38:23.994: INFO: stderr: "" Feb 19 11:38:23.994: INFO: stdout: "pod/redis-master-4qtf9 patched\n" STEP: checking annotations Feb 19 11:38:24.006: INFO: Selector matched 1 pods for map[app:redis] Feb 19 11:38:24.006: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:38:24.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-km6mj" for this suite. 
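(Editor's note, not part of the captured log: the patch command above is the core of this test. A minimal sketch of issuing the same patch by hand follows; the pod name redis-master-4qtf9 and namespace e2e-tests-kubectl-km6mj are specific to this run and would differ in any other run.)
kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4qtf9 \
  --namespace=e2e-tests-kubectl-km6mj \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation was merged into the pod's metadata:
kubectl --kubeconfig=/root/.kube/config get pod redis-master-4qtf9 \
  --namespace=e2e-tests-kubectl-km6mj -o jsonpath='{.metadata.annotations.x}'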
Feb 19 11:38:48.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:38:48.166: INFO: namespace: e2e-tests-kubectl-km6mj, resource: bindings, ignored listing per whitelist Feb 19 11:38:48.208: INFO: namespace e2e-tests-kubectl-km6mj deletion completed in 24.19245408s • [SLOW TEST:33.013 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:38:48.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 19 11:38:48.407: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:39:05.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-8wcth" for this suite. 
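(Editor's note, not part of the captured log: the init-container test above only logs "PodSpec: initContainers in spec.initContainers", so here is a hedged sketch of the kind of pod it exercises — init containers that must all run to completion before the app container starts, on a pod that is never restarted. The pod name and the busybox image are illustrative assumptions, not what the conformance suite actually uses.)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo init-1 done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo init-2 done']
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'echo main ran after both init containers']
EOF
# Init container results are reported separately from app containers:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].name}'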
Feb 19 11:39:11.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:39:11.405: INFO: namespace: e2e-tests-init-container-8wcth, resource: bindings, ignored listing per whitelist Feb 19 11:39:11.504: INFO: namespace e2e-tests-init-container-8wcth deletion completed in 6.328797318s • [SLOW TEST:23.295 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:39:11.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-5vzg5/configmap-test-72b0e89d-530c-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:39:11.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-5vzg5" to be "success or failure" Feb 19 11:39:11.756: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141529ms Feb 19 11:39:13.905: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155425053s Feb 19 11:39:15.950: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20083103s Feb 19 11:39:17.963: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21339112s Feb 19 11:39:20.651: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.902024604s Feb 19 11:39:22.670: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.920932056s STEP: Saw pod success Feb 19 11:39:22.670: INFO: Pod "pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:39:22.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008 container env-test: STEP: delete the pod Feb 19 11:39:22.851: INFO: Waiting for pod pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:39:22.881: INFO: Pod pod-configmaps-72b3831d-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:39:22.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5vzg5" for this suite. Feb 19 11:39:28.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:39:29.089: INFO: namespace: e2e-tests-configmap-5vzg5, resource: bindings, ignored listing per whitelist Feb 19 11:39:29.102: INFO: namespace e2e-tests-configmap-5vzg5 deletion completed in 6.208826488s • [SLOW TEST:17.598 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:39:29.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 19 11:39:29.455: INFO: Waiting up to 5m0s for pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-mljbf" to be "success or failure" Feb 19 11:39:29.466: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642451ms Feb 19 11:39:31.501: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045812007s Feb 19 11:39:33.518: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06320858s Feb 19 11:39:35.532: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076478234s Feb 19 11:39:37.548: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092718821s Feb 19 11:39:39.572: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.117274906s STEP: Saw pod success Feb 19 11:39:39.573: INFO: Pod "downward-api-7d304422-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:39:39.586: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7d304422-530c-11ea-a0a3-0242ac110008 container dapi-container: STEP: delete the pod Feb 19 11:39:39.674: INFO: Waiting for pod downward-api-7d304422-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:39:39.780: INFO: Pod downward-api-7d304422-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:39:39.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mljbf" for this suite. Feb 19 11:39:45.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:39:46.046: INFO: namespace: e2e-tests-downward-api-mljbf, resource: bindings, ignored listing per whitelist Feb 19 11:39:46.051: INFO: namespace e2e-tests-downward-api-mljbf deletion completed in 6.257979897s • [SLOW TEST:16.948 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:39:46.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:39:46.266: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 19 11:39:46.373: INFO: Number of nodes with available pods: 0 Feb 19 11:39:46.373: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:47.959: INFO: Number of nodes with available pods: 0 Feb 19 11:39:47.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:48.431: INFO: Number of nodes with available pods: 0 Feb 19 11:39:48.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:49.408: INFO: Number of nodes with available pods: 0 Feb 19 11:39:49.408: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:50.401: INFO: Number of nodes with available pods: 0 Feb 19 11:39:50.401: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:51.552: INFO: Number of nodes with available pods: 0 Feb 19 11:39:51.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:52.396: INFO: Number of nodes with available pods: 0 Feb 19 11:39:52.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:53.537: INFO: Number of nodes with available pods: 0 Feb 19 11:39:53.537: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:54.400: INFO: Number of nodes with available pods: 0 Feb 19 11:39:54.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:55.389: INFO: Number of nodes with available pods: 0 Feb 19 11:39:55.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:56.394: INFO: Number of nodes with available pods: 0 Feb 19 11:39:56.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:39:57.399: INFO: Number of nodes with available pods: 1 Feb 19 11:39:57.399: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 19 11:39:57.583: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:39:58.637: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:39:59.621: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:02.324: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:02.724: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:03.633: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:04.637: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:05.621: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 19 11:40:06.681: INFO: Wrong image for pod: daemon-set-2rdkr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 19 11:40:06.681: INFO: Pod daemon-set-2rdkr is not available Feb 19 11:40:07.617: INFO: Pod daemon-set-8srq9 is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 19 11:40:07.633: INFO: Number of nodes with available pods: 0 Feb 19 11:40:07.633: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:09.137: INFO: Number of nodes with available pods: 0 Feb 19 11:40:09.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:09.698: INFO: Number of nodes with available pods: 0 Feb 19 11:40:09.698: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:10.651: INFO: Number of nodes with available pods: 0 Feb 19 11:40:10.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:11.657: INFO: Number of nodes with available pods: 0 Feb 19 11:40:11.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:12.657: INFO: Number of nodes with available pods: 0 Feb 19 11:40:12.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:13.661: INFO: Number of nodes with available pods: 0 Feb 19 11:40:13.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:14.740: INFO: Number of nodes with available pods: 0 Feb 19 11:40:14.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:15.752: INFO: Number of nodes with available pods: 0 Feb 19 11:40:15.753: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:16.657: INFO: Number of nodes with available pods: 0 Feb 19 11:40:16.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 19 11:40:17.655: INFO: Number of nodes with available pods: 1 Feb 19 11:40:17.656: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p7lbz, will wait for the garbage collector to delete the pods Feb 19 11:40:17.820: INFO: Deleting DaemonSet.extensions daemon-set took: 26.643709ms Feb 19 11:40:20.721: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.901522113s Feb 19 11:40:32.653: INFO: Number of nodes with available pods: 0 Feb 19 11:40:32.653: INFO: Number of running nodes: 0, number of available pods: 0 Feb 19 11:40:32.672: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p7lbz/daemonsets","resourceVersion":"22193953"},"items":null} Feb 19 11:40:32.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p7lbz/pods","resourceVersion":"22193953"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:40:32.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-p7lbz" for this suite. 
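(Editor's note, not part of the captured log: the run above creates a single-container DaemonSet, swaps its image from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0, and waits for the RollingUpdate strategy to replace the daemon pod on each node. A minimal hand-run sketch of the same flow follows; the daemon-set-demo name and the nginx tags used for the update are illustrative assumptions.)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# Updating the pod template triggers a rolling replacement of the daemon pods:
kubectl set image daemonset/daemon-set-demo app=nginx:1.15-alpine
kubectl rollout status daemonset/daemon-set-demo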
Feb 19 11:40:38.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:40:38.857: INFO: namespace: e2e-tests-daemonsets-p7lbz, resource: bindings, ignored listing per whitelist Feb 19 11:40:38.857: INFO: namespace e2e-tests-daemonsets-p7lbz deletion completed in 6.158309551s • [SLOW TEST:52.806 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:40:38.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hwmgf.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hwmgf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hwmgf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hwmgf.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hwmgf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hwmgf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 19 11:40:57.529: INFO: DNS probes using e2e-tests-dns-hwmgf/dns-test-a6c3eb25-530c-11ea-a0a3-0242ac110008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:40:57.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-hwmgf" for this suite. 
Feb 19 11:41:05.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:41:05.921: INFO: namespace: e2e-tests-dns-hwmgf, resource: bindings, ignored listing per whitelist Feb 19 11:41:05.942: INFO: namespace e2e-tests-dns-hwmgf deletion completed in 8.252538968s • [SLOW TEST:27.086 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:41:05.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 19 11:41:06.144: INFO: Waiting up to 5m0s for pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-k9fkk" to be "success or failure" Feb 19 11:41:06.157: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.074614ms Feb 19 11:41:08.168: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024631505s Feb 19 11:41:10.195: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05130733s Feb 19 11:41:12.391: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247460237s Feb 19 11:41:14.424: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280508911s Feb 19 11:41:16.441: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297307572s STEP: Saw pod success Feb 19 11:41:16.441: INFO: Pod "pod-b6e3cc02-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:41:16.455: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b6e3cc02-530c-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 11:41:16.584: INFO: Waiting for pod pod-b6e3cc02-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:41:16.593: INFO: Pod pod-b6e3cc02-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:41:16.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k9fkk" for this suite. 
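(Editor's note, not part of the captured log: the emptydir tests in this suite mount an emptyDir volume and assert the ownership and mode of a file created in it — here root-owned with mode 0644 on the default medium. The sketch below is only a loose approximation using busybox rather than the suite's own test image.)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}
  containers:
  - name: test-container
    image: busybox
    command: ['sh', '-c', 'touch /scratch/f && chmod 0644 /scratch/f && ls -ln /scratch/f']
    volumeMounts:
    - name: scratch
      mountPath: /scratch
EOF
# Once the pod has completed, the log should show -rw-r--r-- owned by uid/gid 0 0:
kubectl logs emptydir-mode-demo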
Feb 19 11:41:22.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:41:23.026: INFO: namespace: e2e-tests-emptydir-k9fkk, resource: bindings, ignored listing per whitelist Feb 19 11:41:23.032: INFO: namespace e2e-tests-emptydir-k9fkk deletion completed in 6.418226665s • [SLOW TEST:17.090 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:41:23.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:41:23.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-wjgjd" for this suite. 
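(Editor's note, not part of the captured log: the "secure master service" check shows no visible steps because it essentially reads the built-in kubernetes Service in the default namespace and asserts that it exposes the API server's HTTPS port. The equivalent manual check is below.)
kubectl --kubeconfig=/root/.kube/config get service kubernetes --namespace=default -o wide
# Expect a ClusterIP service whose PORT(S) column lists 443/TCP, the secure API endpoint.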
Feb 19 11:41:29.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:41:29.656: INFO: namespace: e2e-tests-services-wjgjd, resource: bindings, ignored listing per whitelist Feb 19 11:41:29.699: INFO: namespace e2e-tests-services-wjgjd deletion completed in 6.33019042s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.666 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:41:29.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c50c2267-530c-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:41:29.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-2mb5r" to be "success or failure" Feb 19 11:41:29.963: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.56102ms Feb 19 11:41:32.778: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82800622s Feb 19 11:41:34.796: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.846253109s Feb 19 11:41:36.958: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.008295866s Feb 19 11:41:38.980: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029814549s Feb 19 11:41:40.994: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.044650222s STEP: Saw pod success Feb 19 11:41:40.994: INFO: Pod "pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:41:41.000: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 19 11:41:41.674: INFO: Waiting for pod pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:41:41.935: INFO: Pod pod-configmaps-c5104de0-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:41:41.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2mb5r" for this suite. Feb 19 11:41:48.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:41:48.397: INFO: namespace: e2e-tests-configmap-2mb5r, resource: bindings, ignored listing per whitelist Feb 19 11:41:48.416: INFO: namespace e2e-tests-configmap-2mb5r deletion completed in 6.457669756s • [SLOW TEST:18.716 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:41:48.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-d061c4b4-530c-11ea-a0a3-0242ac110008 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:42:03.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qdwf5" for this suite. 
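(Editor's note, not part of the captured log: the binary-data test above stores both text and binary payloads in one ConfigMap and checks that each comes back intact through a volume mount. A hedged command-line sketch of the same idea follows; the names and /tmp paths are illustrative, and it assumes a reasonably recent kubectl that stores non-UTF-8 file content under binaryData.)
printf 'hello\n' > /tmp/text-key
printf '\000\001\002\003' > /tmp/binary-key
kubectl create configmap binary-demo --from-file=text=/tmp/text-key --from-file=bin=/tmp/binary-key
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cm
    configMap:
      name: binary-demo
  containers:
  - name: reader
    image: busybox
    command: ['sh', '-c', 'cat /etc/cm/text && wc -c < /etc/cm/bin']
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
EOF
# Once the pod has completed, expect "hello" followed by 4 (the binary payload's byte count):
kubectl logs configmap-binary-demo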
Feb 19 11:42:27.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:42:27.731: INFO: namespace: e2e-tests-configmap-qdwf5, resource: bindings, ignored listing per whitelist Feb 19 11:42:27.750: INFO: namespace e2e-tests-configmap-qdwf5 deletion completed in 24.636020137s • [SLOW TEST:39.334 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:42:27.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e7b0272a-530c-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:42:28.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-j4xj9" to be "success or failure" Feb 19 11:42:28.037: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.161616ms Feb 19 11:42:30.074: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05814533s Feb 19 11:42:32.083: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067647308s Feb 19 11:42:34.257: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241627253s Feb 19 11:42:36.279: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263455164s Feb 19 11:42:38.303: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.287451896s STEP: Saw pod success Feb 19 11:42:38.304: INFO: Pod "pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:42:38.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 19 11:42:38.489: INFO: Waiting for pod pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:42:38.508: INFO: Pod pod-projected-configmaps-e7b0d593-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:42:38.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j4xj9" for this suite. Feb 19 11:42:44.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:42:44.933: INFO: namespace: e2e-tests-projected-j4xj9, resource: bindings, ignored listing per whitelist Feb 19 11:42:44.944: INFO: namespace e2e-tests-projected-j4xj9 deletion completed in 6.426299802s • [SLOW TEST:17.193 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:42:44.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 19 11:42:45.355: INFO: Waiting up to 5m0s for pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-xx268" to be "success or failure" Feb 19 11:42:45.392: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.451334ms Feb 19 11:42:47.404: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048466521s Feb 19 11:42:49.416: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061285008s Feb 19 11:42:51.430: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074592314s Feb 19 11:42:53.605: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250307789s Feb 19 11:42:55.620: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.264822265s STEP: Saw pod success Feb 19 11:42:55.620: INFO: Pod "pod-f202d47e-530c-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:42:55.624: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f202d47e-530c-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 11:42:56.272: INFO: Waiting for pod pod-f202d47e-530c-11ea-a0a3-0242ac110008 to disappear Feb 19 11:42:56.552: INFO: Pod pod-f202d47e-530c-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:42:56.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xx268" for this suite. Feb 19 11:43:02.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:43:02.942: INFO: namespace: e2e-tests-emptydir-xx268, resource: bindings, ignored listing per whitelist Feb 19 11:43:02.981: INFO: namespace e2e-tests-emptydir-xx268 deletion completed in 6.409257879s • [SLOW TEST:18.036 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:43:02.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 19 11:43:03.327: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:43:03.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qqgkt" for this suite. 
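(Editor's note, not part of the captured log: passing port 0 asks kubectl proxy to bind an ephemeral port and print the address it chose, which is what the test then curls for /api/. A hand-run sketch, assuming a bash shell; the port-parsing sed expression is only an assumption about the "Starting to serve on 127.0.0.1:PORT" line kubectl typically prints.)
PROXY_LOG=$(mktemp)
kubectl --kubeconfig=/root/.kube/config proxy --port=0 > "$PROXY_LOG" &
sleep 1
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' "$PROXY_LOG")
# List API versions through the proxy, then stop it:
curl "http://127.0.0.1:${PORT}/api/"
kill %1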
Feb 19 11:43:09.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:43:09.670: INFO: namespace: e2e-tests-kubectl-qqgkt, resource: bindings, ignored listing per whitelist Feb 19 11:43:09.757: INFO: namespace e2e-tests-kubectl-qqgkt deletion completed in 6.272411852s • [SLOW TEST:6.776 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:43:09.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-00b6e6ad-530d-11ea-a0a3-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 19 11:43:10.009: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-9rh6x" to be "success or failure" Feb 19 11:43:10.017: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273307ms Feb 19 11:43:12.033: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024174085s Feb 19 11:43:14.054: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045382083s Feb 19 11:43:16.085: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076463329s Feb 19 11:43:18.102: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093521879s Feb 19 11:43:20.291: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.28183388s STEP: Saw pod success Feb 19 11:43:20.291: INFO: Pod "pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:43:20.300: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 19 11:43:20.685: INFO: Waiting for pod pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008 to disappear Feb 19 11:43:20.765: INFO: Pod pod-projected-configmaps-00b7a0ee-530d-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:43:20.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9rh6x" for this suite. Feb 19 11:43:28.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:43:29.006: INFO: namespace: e2e-tests-projected-9rh6x, resource: bindings, ignored listing per whitelist Feb 19 11:43:29.043: INFO: namespace e2e-tests-projected-9rh6x deletion completed in 8.263260707s • [SLOW TEST:19.285 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:43:29.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 19 11:43:29.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 19 11:43:29.720: INFO: stderr: "" Feb 19 11:43:29.720: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 
11:43:29.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8hwf2" for this suite. Feb 19 11:43:35.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:43:35.944: INFO: namespace: e2e-tests-kubectl-8hwf2, resource: bindings, ignored listing per whitelist Feb 19 11:43:36.148: INFO: namespace e2e-tests-kubectl-8hwf2 deletion completed in 6.40941368s • [SLOW TEST:7.105 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:43:36.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 19 11:43:36.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-tbnbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnbt/configmaps/e2e-watch-test-resource-version,UID:106bcf55-530d-11ea-a994-fa163e34d433,ResourceVersion:22194415,Generation:0,CreationTimestamp:2020-02-19 11:43:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 19 11:43:36.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-tbnbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnbt/configmaps/e2e-watch-test-resource-version,UID:106bcf55-530d-11ea-a994-fa163e34d433,ResourceVersion:22194416,Generation:0,CreationTimestamp:2020-02-19 11:43:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:43:36.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-tbnbt" for this suite. Feb 19 11:43:42.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:43:42.726: INFO: namespace: e2e-tests-watch-tbnbt, resource: bindings, ignored listing per whitelist Feb 19 11:43:42.785: INFO: namespace e2e-tests-watch-tbnbt deletion completed in 6.26250631s • [SLOW TEST:6.636 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:43:42.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 19 11:43:42.982: INFO: Waiting up to 5m0s for pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-4xj8g" to be "success or failure" Feb 19 11:43:42.987: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037361ms Feb 19 11:43:45.769: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.786187479s Feb 19 11:43:47.801: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.818173161s Feb 19 11:43:49.873: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.89089632s Feb 19 11:43:51.931: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.948250087s Feb 19 11:43:53.955: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.97246514s Feb 19 11:43:55.971: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.988762169s STEP: Saw pod success Feb 19 11:43:55.971: INFO: Pod "pod-145e7b73-530d-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:43:55.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-145e7b73-530d-11ea-a0a3-0242ac110008 container test-container: STEP: delete the pod Feb 19 11:43:56.109: INFO: Waiting for pod pod-145e7b73-530d-11ea-a0a3-0242ac110008 to disappear Feb 19 11:43:56.120: INFO: Pod pod-145e7b73-530d-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:43:56.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4xj8g" for this suite. Feb 19 11:44:02.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:44:02.239: INFO: namespace: e2e-tests-emptydir-4xj8g, resource: bindings, ignored listing per whitelist Feb 19 11:44:02.353: INFO: namespace e2e-tests-emptydir-4xj8g deletion completed in 6.223646578s • [SLOW TEST:19.567 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:44:02.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-44s9f;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-44s9f;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-44s9f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.252.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.252.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.252.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.252.196_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-44s9f;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-44s9f;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-44s9f.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-44s9f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-44s9f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.252.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.252.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.252.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.252.196_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 19 11:44:19.134: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.139: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.145: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-44s9f from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.153: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-44s9f from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.157: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.162: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.165: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.170: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.173: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.177: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.181: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.186: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008: the server could not find the 
requested resource (get pods dns-test-202ba89f-530d-11ea-a0a3-0242ac110008) Feb 19 11:44:19.195: INFO: Lookups using e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-44s9f jessie_tcp@dns-test-service.e2e-tests-dns-44s9f jessie_udp@dns-test-service.e2e-tests-dns-44s9f.svc jessie_tcp@dns-test-service.e2e-tests-dns-44s9f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-44s9f.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-44s9f.svc jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 19 11:44:24.859: INFO: DNS probes using e2e-tests-dns-44s9f/dns-test-202ba89f-530d-11ea-a0a3-0242ac110008 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:44:26.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-44s9f" for this suite. Feb 19 11:44:34.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:44:34.551: INFO: namespace: e2e-tests-dns-44s9f, resource: bindings, ignored listing per whitelist Feb 19 11:44:34.583: INFO: namespace e2e-tests-dns-44s9f deletion completed in 8.277845259s • [SLOW TEST:32.230 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:44:34.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 19 11:44:34.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-gp785" to be "success or failure" Feb 19 11:44:34.942: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.962013ms Feb 19 11:44:37.320: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40956353s Feb 19 11:44:39.342: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.431072913s Feb 19 11:44:41.367: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456026379s Feb 19 11:44:43.387: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476037045s Feb 19 11:44:45.486: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.575316145s Feb 19 11:44:48.100: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.189312541s STEP: Saw pod success Feb 19 11:44:48.101: INFO: Pod "downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008" satisfied condition "success or failure" Feb 19 11:44:48.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008 container client-container: STEP: delete the pod Feb 19 11:44:48.562: INFO: Waiting for pod downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008 to disappear Feb 19 11:44:48.581: INFO: Pod downwardapi-volume-33535e42-530d-11ea-a0a3-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 19 11:44:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gp785" for this suite. Feb 19 11:44:54.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 19 11:44:54.736: INFO: namespace: e2e-tests-projected-gp785, resource: bindings, ignored listing per whitelist Feb 19 11:44:54.852: INFO: namespace e2e-tests-projected-gp785 deletion completed in 6.249718894s • [SLOW TEST:20.269 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 19 11:44:54.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 19 11:44:55.060: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 15.381989ms)
Feb 19 11:44:55.066: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.190183ms)
Feb 19 11:44:55.070: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.343849ms)
Feb 19 11:44:55.074: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.962319ms)
Feb 19 11:44:55.078: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.923971ms)
Feb 19 11:44:55.082: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.107846ms)
Feb 19 11:44:55.086: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.494071ms)
Feb 19 11:44:55.091: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.157243ms)
Feb 19 11:44:55.095: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.947394ms)
Feb 19 11:44:55.098: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.648199ms)
Feb 19 11:44:55.102: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.59068ms)
Feb 19 11:44:55.106: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.222083ms)
Feb 19 11:44:55.111: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.27283ms)
Feb 19 11:44:55.114: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.203004ms)
Feb 19 11:44:55.118: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.369009ms)
Feb 19 11:44:55.124: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.214717ms)
Feb 19 11:44:55.158: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.543321ms)
Feb 19 11:44:55.164: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.0187ms)
Feb 19 11:44:55.172: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.378607ms)
Feb 19 11:44:55.177: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.975786ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:44:55.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-fv8sp" for this suite.
Feb 19 11:45:01.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:45:01.428: INFO: namespace: e2e-tests-proxy-fv8sp, resource: bindings, ignored listing per whitelist
Feb 19 11:45:01.466: INFO: namespace e2e-tests-proxy-fv8sp deletion completed in 6.284510412s

• [SLOW TEST:6.613 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
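Note: the proxy test above issues twenty GETs against the node-logs proxy subresource and records a 200 plus a directory listing (alternatives.log, ...) each time. A minimal way to hit the same endpoint by hand, assuming kubectl is pointed at this cluster with the same kubeconfig, is to request the raw API path that appears in the log:

  # Fetch the kubelet log directory listing through the API server's node proxy subresource
  kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"

A successful response is the same HTML directory listing whose first entries are truncated in the log output above.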
SS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:45:01.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 11:45:02.416: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 19 11:45:07.433: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 19 11:45:13.452: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 19 11:45:13.504: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vk99c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vk99c/deployments/test-cleanup-deployment,UID:4a500bc5-530d-11ea-a994-fa163e34d433,ResourceVersion:22194658,Generation:1,CreationTimestamp:2020-02-19 11:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 19 11:45:13.509: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:45:13.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vk99c" for this suite.
Feb 19 11:45:21.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:45:22.045: INFO: namespace: e2e-tests-deployment-vk99c, resource: bindings, ignored listing per whitelist
Feb 19 11:45:22.229: INFO: namespace e2e-tests-deployment-vk99c deletion completed in 8.636881947s

• [SLOW TEST:20.763 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
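Note: the Deployment dump above shows RevisionHistoryLimit:*0, which is what allows the controller to delete superseded ReplicaSets once test-cleanup-deployment adopts the cleanup-pod pods. While the e2e-tests-deployment-vk99c namespace still existed, the cleanup could have been verified by hand with something like the following sketch (not part of the recorded run):

  # revisionHistoryLimit of 0 means no old ReplicaSets are retained after a rollout
  kubectl -n e2e-tests-deployment-vk99c get deployment test-cleanup-deployment -o jsonpath='{.spec.revisionHistoryLimit}'
  # only the ReplicaSet owned by the current revision should be listed
  kubectl -n e2e-tests-deployment-vk99c get rs -l name=cleanup-pod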
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:45:22.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-r572f
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-r572f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-r572f
Feb 19 11:45:23.971: INFO: Found 0 stateful pods, waiting for 1
Feb 19 11:45:34.081: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 19 11:45:34.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 11:45:34.723: INFO: stderr: "I0219 11:45:34.319779    1115 log.go:172] (0xc0003202c0) (0xc00074c000) Create stream\nI0219 11:45:34.320160    1115 log.go:172] (0xc0003202c0) (0xc00074c000) Stream added, broadcasting: 1\nI0219 11:45:34.325976    1115 log.go:172] (0xc0003202c0) Reply frame received for 1\nI0219 11:45:34.326014    1115 log.go:172] (0xc0003202c0) (0xc0005fac80) Create stream\nI0219 11:45:34.326023    1115 log.go:172] (0xc0003202c0) (0xc0005fac80) Stream added, broadcasting: 3\nI0219 11:45:34.337911    1115 log.go:172] (0xc0003202c0) Reply frame received for 3\nI0219 11:45:34.337998    1115 log.go:172] (0xc0003202c0) (0xc00074c140) Create stream\nI0219 11:45:34.338018    1115 log.go:172] (0xc0003202c0) (0xc00074c140) Stream added, broadcasting: 5\nI0219 11:45:34.340660    1115 log.go:172] (0xc0003202c0) Reply frame received for 5\nI0219 11:45:34.628061    1115 log.go:172] (0xc0003202c0) Data frame received for 3\nI0219 11:45:34.628138    1115 log.go:172] (0xc0005fac80) (3) Data frame handling\nI0219 11:45:34.628156    1115 log.go:172] (0xc0005fac80) (3) Data frame sent\nI0219 11:45:34.715729    1115 log.go:172] (0xc0003202c0) Data frame received for 1\nI0219 11:45:34.716103    1115 log.go:172] (0xc00074c000) (1) Data frame handling\nI0219 11:45:34.716120    1115 log.go:172] (0xc00074c000) (1) Data frame sent\nI0219 11:45:34.716142    1115 log.go:172] (0xc0003202c0) (0xc00074c000) Stream removed, broadcasting: 1\nI0219 11:45:34.716380    1115 log.go:172] (0xc0003202c0) (0xc0005fac80) Stream removed, broadcasting: 3\nI0219 11:45:34.716423    1115 log.go:172] (0xc0003202c0) (0xc00074c140) Stream removed, broadcasting: 5\nI0219 11:45:34.716451    1115 log.go:172] (0xc0003202c0) Go away received\nI0219 11:45:34.716469    1115 log.go:172] (0xc0003202c0) (0xc00074c000) Stream removed, broadcasting: 1\nI0219 11:45:34.716487    1115 log.go:172] (0xc0003202c0) (0xc0005fac80) Stream removed, broadcasting: 3\nI0219 11:45:34.716500    1115 log.go:172] (0xc0003202c0) (0xc00074c140) Stream removed, broadcasting: 5\n"
Feb 19 11:45:34.724: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 11:45:34.724: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
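Note: the exec above is how the test makes ss-0 unready: the page nginx serves (presumably the target of the pod's readiness probe) is moved out of the web root, and the Ready=false waits below confirm the condition flips. Reproducing the same break by hand, assuming the namespace and pod still exist, would look like:

  # Break readiness: move the page nginx serves out of the web root (same command the framework runs)
  kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  # Watch the pod's Ready condition flip to False
  kubectl -n e2e-tests-statefulset-r572f get pod ss-0 -w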

Feb 19 11:45:34.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 19 11:45:44.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 11:45:44.757: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 11:45:44.795: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:45:44.795: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:45:44.795: INFO: 
Feb 19 11:45:44.795: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 19 11:45:45.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988867871s
Feb 19 11:45:46.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96672141s
Feb 19 11:45:47.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.888043997s
Feb 19 11:45:49.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.867830327s
Feb 19 11:45:50.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.784464533s
Feb 19 11:45:51.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.749040375s
Feb 19 11:45:52.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.711252974s
Feb 19 11:45:53.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.319148124s
Feb 19 11:45:54.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 241.507688ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-r572f
Feb 19 11:45:55.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:45:56.120: INFO: stderr: "I0219 11:45:55.787572    1137 log.go:172] (0xc00074a0b0) (0xc0003a8000) Create stream\nI0219 11:45:55.787655    1137 log.go:172] (0xc00074a0b0) (0xc0003a8000) Stream added, broadcasting: 1\nI0219 11:45:55.803318    1137 log.go:172] (0xc00074a0b0) Reply frame received for 1\nI0219 11:45:55.803427    1137 log.go:172] (0xc00074a0b0) (0xc0003df720) Create stream\nI0219 11:45:55.803439    1137 log.go:172] (0xc00074a0b0) (0xc0003df720) Stream added, broadcasting: 3\nI0219 11:45:55.806156    1137 log.go:172] (0xc00074a0b0) Reply frame received for 3\nI0219 11:45:55.806332    1137 log.go:172] (0xc00074a0b0) (0xc00086f2c0) Create stream\nI0219 11:45:55.806371    1137 log.go:172] (0xc00074a0b0) (0xc00086f2c0) Stream added, broadcasting: 5\nI0219 11:45:55.810215    1137 log.go:172] (0xc00074a0b0) Reply frame received for 5\nI0219 11:45:55.986500    1137 log.go:172] (0xc00074a0b0) Data frame received for 3\nI0219 11:45:55.986613    1137 log.go:172] (0xc0003df720) (3) Data frame handling\nI0219 11:45:55.986651    1137 log.go:172] (0xc0003df720) (3) Data frame sent\nI0219 11:45:56.111961    1137 log.go:172] (0xc00074a0b0) Data frame received for 1\nI0219 11:45:56.112243    1137 log.go:172] (0xc0003a8000) (1) Data frame handling\nI0219 11:45:56.112281    1137 log.go:172] (0xc0003a8000) (1) Data frame sent\nI0219 11:45:56.112697    1137 log.go:172] (0xc00074a0b0) (0xc0003df720) Stream removed, broadcasting: 3\nI0219 11:45:56.112894    1137 log.go:172] (0xc00074a0b0) (0xc0003a8000) Stream removed, broadcasting: 1\nI0219 11:45:56.112997    1137 log.go:172] (0xc00074a0b0) (0xc00086f2c0) Stream removed, broadcasting: 5\nI0219 11:45:56.113049    1137 log.go:172] (0xc00074a0b0) Go away received\nI0219 11:45:56.113126    1137 log.go:172] (0xc00074a0b0) (0xc0003a8000) Stream removed, broadcasting: 1\nI0219 11:45:56.113142    1137 log.go:172] (0xc00074a0b0) (0xc0003df720) Stream removed, broadcasting: 3\nI0219 11:45:56.113149    1137 log.go:172] (0xc00074a0b0) (0xc00086f2c0) Stream removed, broadcasting: 5\n"
Feb 19 11:45:56.121: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 11:45:56.121: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 11:45:56.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:45:56.782: INFO: stderr: "I0219 11:45:56.246878    1159 log.go:172] (0xc00071c370) (0xc0005d7360) Create stream\nI0219 11:45:56.247017    1159 log.go:172] (0xc00071c370) (0xc0005d7360) Stream added, broadcasting: 1\nI0219 11:45:56.250467    1159 log.go:172] (0xc00071c370) Reply frame received for 1\nI0219 11:45:56.250487    1159 log.go:172] (0xc00071c370) (0xc0005d7400) Create stream\nI0219 11:45:56.250491    1159 log.go:172] (0xc00071c370) (0xc0005d7400) Stream added, broadcasting: 3\nI0219 11:45:56.251177    1159 log.go:172] (0xc00071c370) Reply frame received for 3\nI0219 11:45:56.251197    1159 log.go:172] (0xc00071c370) (0xc0005d74a0) Create stream\nI0219 11:45:56.251207    1159 log.go:172] (0xc00071c370) (0xc0005d74a0) Stream added, broadcasting: 5\nI0219 11:45:56.251970    1159 log.go:172] (0xc00071c370) Reply frame received for 5\nI0219 11:45:56.373529    1159 log.go:172] (0xc00071c370) Data frame received for 3\nI0219 11:45:56.373589    1159 log.go:172] (0xc0005d7400) (3) Data frame handling\nI0219 11:45:56.373620    1159 log.go:172] (0xc0005d7400) (3) Data frame sent\nI0219 11:45:56.375848    1159 log.go:172] (0xc00071c370) Data frame received for 5\nI0219 11:45:56.375925    1159 log.go:172] (0xc0005d74a0) (5) Data frame handling\nI0219 11:45:56.375955    1159 log.go:172] (0xc0005d74a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0219 11:45:56.773916    1159 log.go:172] (0xc00071c370) (0xc0005d7400) Stream removed, broadcasting: 3\nI0219 11:45:56.774709    1159 log.go:172] (0xc00071c370) Data frame received for 1\nI0219 11:45:56.774737    1159 log.go:172] (0xc0005d7360) (1) Data frame handling\nI0219 11:45:56.774753    1159 log.go:172] (0xc0005d7360) (1) Data frame sent\nI0219 11:45:56.774875    1159 log.go:172] (0xc00071c370) (0xc0005d7360) Stream removed, broadcasting: 1\nI0219 11:45:56.774945    1159 log.go:172] (0xc00071c370) (0xc0005d74a0) Stream removed, broadcasting: 5\nI0219 11:45:56.774995    1159 log.go:172] (0xc00071c370) Go away received\nI0219 11:45:56.775143    1159 log.go:172] (0xc00071c370) (0xc0005d7360) Stream removed, broadcasting: 1\nI0219 11:45:56.775171    1159 log.go:172] (0xc00071c370) (0xc0005d7400) Stream removed, broadcasting: 3\nI0219 11:45:56.775188    1159 log.go:172] (0xc00071c370) (0xc0005d74a0) Stream removed, broadcasting: 5\n"
Feb 19 11:45:56.783: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 11:45:56.783: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 11:45:56.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:45:57.280: INFO: stderr: "I0219 11:45:56.990497    1181 log.go:172] (0xc000138790) (0xc000706640) Create stream\nI0219 11:45:56.990686    1181 log.go:172] (0xc000138790) (0xc000706640) Stream added, broadcasting: 1\nI0219 11:45:57.006961    1181 log.go:172] (0xc000138790) Reply frame received for 1\nI0219 11:45:57.006998    1181 log.go:172] (0xc000138790) (0xc00065ac80) Create stream\nI0219 11:45:57.007005    1181 log.go:172] (0xc000138790) (0xc00065ac80) Stream added, broadcasting: 3\nI0219 11:45:57.007875    1181 log.go:172] (0xc000138790) Reply frame received for 3\nI0219 11:45:57.007907    1181 log.go:172] (0xc000138790) (0xc0007066e0) Create stream\nI0219 11:45:57.007918    1181 log.go:172] (0xc000138790) (0xc0007066e0) Stream added, broadcasting: 5\nI0219 11:45:57.009158    1181 log.go:172] (0xc000138790) Reply frame received for 5\nI0219 11:45:57.162694    1181 log.go:172] (0xc000138790) Data frame received for 5\nI0219 11:45:57.163121    1181 log.go:172] (0xc0007066e0) (5) Data frame handling\nI0219 11:45:57.163148    1181 log.go:172] (0xc0007066e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0219 11:45:57.163187    1181 log.go:172] (0xc000138790) Data frame received for 3\nI0219 11:45:57.163196    1181 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0219 11:45:57.163222    1181 log.go:172] (0xc00065ac80) (3) Data frame sent\nI0219 11:45:57.271037    1181 log.go:172] (0xc000138790) (0xc00065ac80) Stream removed, broadcasting: 3\nI0219 11:45:57.271399    1181 log.go:172] (0xc000138790) Data frame received for 1\nI0219 11:45:57.271495    1181 log.go:172] (0xc000138790) (0xc0007066e0) Stream removed, broadcasting: 5\nI0219 11:45:57.271557    1181 log.go:172] (0xc000706640) (1) Data frame handling\nI0219 11:45:57.271578    1181 log.go:172] (0xc000706640) (1) Data frame sent\nI0219 11:45:57.271586    1181 log.go:172] (0xc000138790) (0xc000706640) Stream removed, broadcasting: 1\nI0219 11:45:57.271599    1181 log.go:172] (0xc000138790) Go away received\nI0219 11:45:57.272000    1181 log.go:172] (0xc000138790) (0xc000706640) Stream removed, broadcasting: 1\nI0219 11:45:57.272058    1181 log.go:172] (0xc000138790) (0xc00065ac80) Stream removed, broadcasting: 3\nI0219 11:45:57.272096    1181 log.go:172] (0xc000138790) (0xc0007066e0) Stream removed, broadcasting: 5\n"
Feb 19 11:45:57.280: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 11:45:57.280: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 11:45:57.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 11:45:57.411: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Feb 19 11:46:07.438: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 11:46:07.438: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 11:46:07.439: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
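Note: at this point the burst scale-up from 1 to 3 replicas has completed even though ss-0 was deliberately made unready before the scale-up was verified. The framework drives the scale through the API directly; a rough hand-driven equivalent, assuming the StatefulSet still exists, would be:

  # Scale the StatefulSet to 3 replicas and confirm readyReplicas catches up
  kubectl -n e2e-tests-statefulset-r572f scale statefulset ss --replicas=3
  kubectl -n e2e-tests-statefulset-r572f get statefulset ss -o jsonpath='{.status.readyReplicas}'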
STEP: Scale down will not halt with unhealthy stateful pod
Feb 19 11:46:07.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 11:46:07.997: INFO: stderr: "I0219 11:46:07.650266    1204 log.go:172] (0xc00015c630) (0xc0005bf360) Create stream\nI0219 11:46:07.650452    1204 log.go:172] (0xc00015c630) (0xc0005bf360) Stream added, broadcasting: 1\nI0219 11:46:07.657146    1204 log.go:172] (0xc00015c630) Reply frame received for 1\nI0219 11:46:07.657182    1204 log.go:172] (0xc00015c630) (0xc0002fa000) Create stream\nI0219 11:46:07.657198    1204 log.go:172] (0xc00015c630) (0xc0002fa000) Stream added, broadcasting: 3\nI0219 11:46:07.658311    1204 log.go:172] (0xc00015c630) Reply frame received for 3\nI0219 11:46:07.658334    1204 log.go:172] (0xc00015c630) (0xc0005bf400) Create stream\nI0219 11:46:07.658343    1204 log.go:172] (0xc00015c630) (0xc0005bf400) Stream added, broadcasting: 5\nI0219 11:46:07.659424    1204 log.go:172] (0xc00015c630) Reply frame received for 5\nI0219 11:46:07.796741    1204 log.go:172] (0xc00015c630) Data frame received for 3\nI0219 11:46:07.796820    1204 log.go:172] (0xc0002fa000) (3) Data frame handling\nI0219 11:46:07.796846    1204 log.go:172] (0xc0002fa000) (3) Data frame sent\nI0219 11:46:07.992392    1204 log.go:172] (0xc00015c630) (0xc0002fa000) Stream removed, broadcasting: 3\nI0219 11:46:07.992733    1204 log.go:172] (0xc00015c630) Data frame received for 1\nI0219 11:46:07.992751    1204 log.go:172] (0xc0005bf360) (1) Data frame handling\nI0219 11:46:07.992782    1204 log.go:172] (0xc0005bf360) (1) Data frame sent\nI0219 11:46:07.992798    1204 log.go:172] (0xc00015c630) (0xc0005bf360) Stream removed, broadcasting: 1\nI0219 11:46:07.992881    1204 log.go:172] (0xc00015c630) (0xc0005bf400) Stream removed, broadcasting: 5\nI0219 11:46:07.992964    1204 log.go:172] (0xc00015c630) Go away received\nI0219 11:46:07.993004    1204 log.go:172] (0xc00015c630) (0xc0005bf360) Stream removed, broadcasting: 1\nI0219 11:46:07.993022    1204 log.go:172] (0xc00015c630) (0xc0002fa000) Stream removed, broadcasting: 3\nI0219 11:46:07.993037    1204 log.go:172] (0xc00015c630) (0xc0005bf400) Stream removed, broadcasting: 5\n"
Feb 19 11:46:07.998: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 11:46:07.998: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 11:46:07.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 11:46:08.692: INFO: stderr: "I0219 11:46:08.182716    1225 log.go:172] (0xc0006be2c0) (0xc0006e2640) Create stream\nI0219 11:46:08.182924    1225 log.go:172] (0xc0006be2c0) (0xc0006e2640) Stream added, broadcasting: 1\nI0219 11:46:08.192227    1225 log.go:172] (0xc0006be2c0) Reply frame received for 1\nI0219 11:46:08.192254    1225 log.go:172] (0xc0006be2c0) (0xc00065ab40) Create stream\nI0219 11:46:08.192264    1225 log.go:172] (0xc0006be2c0) (0xc00065ab40) Stream added, broadcasting: 3\nI0219 11:46:08.194028    1225 log.go:172] (0xc0006be2c0) Reply frame received for 3\nI0219 11:46:08.194063    1225 log.go:172] (0xc0006be2c0) (0xc00032c000) Create stream\nI0219 11:46:08.194074    1225 log.go:172] (0xc0006be2c0) (0xc00032c000) Stream added, broadcasting: 5\nI0219 11:46:08.196756    1225 log.go:172] (0xc0006be2c0) Reply frame received for 5\nI0219 11:46:08.341512    1225 log.go:172] (0xc0006be2c0) Data frame received for 3\nI0219 11:46:08.341624    1225 log.go:172] (0xc00065ab40) (3) Data frame handling\nI0219 11:46:08.341640    1225 log.go:172] (0xc00065ab40) (3) Data frame sent\nI0219 11:46:08.684224    1225 log.go:172] (0xc0006be2c0) (0xc00065ab40) Stream removed, broadcasting: 3\nI0219 11:46:08.684348    1225 log.go:172] (0xc0006be2c0) Data frame received for 1\nI0219 11:46:08.684370    1225 log.go:172] (0xc0006e2640) (1) Data frame handling\nI0219 11:46:08.684387    1225 log.go:172] (0xc0006e2640) (1) Data frame sent\nI0219 11:46:08.684396    1225 log.go:172] (0xc0006be2c0) (0xc0006e2640) Stream removed, broadcasting: 1\nI0219 11:46:08.684626    1225 log.go:172] (0xc0006be2c0) (0xc00032c000) Stream removed, broadcasting: 5\nI0219 11:46:08.684668    1225 log.go:172] (0xc0006be2c0) Go away received\nI0219 11:46:08.684732    1225 log.go:172] (0xc0006be2c0) (0xc0006e2640) Stream removed, broadcasting: 1\nI0219 11:46:08.684744    1225 log.go:172] (0xc0006be2c0) (0xc00065ab40) Stream removed, broadcasting: 3\nI0219 11:46:08.684752    1225 log.go:172] (0xc0006be2c0) (0xc00032c000) Stream removed, broadcasting: 5\n"
Feb 19 11:46:08.692: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 11:46:08.693: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 11:46:08.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 11:46:09.413: INFO: stderr: "I0219 11:46:08.993783    1248 log.go:172] (0xc0007a0210) (0xc0005b4780) Create stream\nI0219 11:46:08.993967    1248 log.go:172] (0xc0007a0210) (0xc0005b4780) Stream added, broadcasting: 1\nI0219 11:46:09.000984    1248 log.go:172] (0xc0007a0210) Reply frame received for 1\nI0219 11:46:09.001063    1248 log.go:172] (0xc0007a0210) (0xc0003b2b40) Create stream\nI0219 11:46:09.001084    1248 log.go:172] (0xc0007a0210) (0xc0003b2b40) Stream added, broadcasting: 3\nI0219 11:46:09.002287    1248 log.go:172] (0xc0007a0210) Reply frame received for 3\nI0219 11:46:09.002346    1248 log.go:172] (0xc0007a0210) (0xc0005f0000) Create stream\nI0219 11:46:09.002360    1248 log.go:172] (0xc0007a0210) (0xc0005f0000) Stream added, broadcasting: 5\nI0219 11:46:09.004210    1248 log.go:172] (0xc0007a0210) Reply frame received for 5\nI0219 11:46:09.259854    1248 log.go:172] (0xc0007a0210) Data frame received for 3\nI0219 11:46:09.259898    1248 log.go:172] (0xc0003b2b40) (3) Data frame handling\nI0219 11:46:09.259918    1248 log.go:172] (0xc0003b2b40) (3) Data frame sent\nI0219 11:46:09.403210    1248 log.go:172] (0xc0007a0210) (0xc0003b2b40) Stream removed, broadcasting: 3\nI0219 11:46:09.403568    1248 log.go:172] (0xc0007a0210) Data frame received for 1\nI0219 11:46:09.403612    1248 log.go:172] (0xc0005b4780) (1) Data frame handling\nI0219 11:46:09.403647    1248 log.go:172] (0xc0005b4780) (1) Data frame sent\nI0219 11:46:09.403677    1248 log.go:172] (0xc0007a0210) (0xc0005b4780) Stream removed, broadcasting: 1\nI0219 11:46:09.404120    1248 log.go:172] (0xc0007a0210) (0xc0005f0000) Stream removed, broadcasting: 5\nI0219 11:46:09.404204    1248 log.go:172] (0xc0007a0210) Go away received\nI0219 11:46:09.404300    1248 log.go:172] (0xc0007a0210) (0xc0005b4780) Stream removed, broadcasting: 1\nI0219 11:46:09.404409    1248 log.go:172] (0xc0007a0210) (0xc0003b2b40) Stream removed, broadcasting: 3\nI0219 11:46:09.404424    1248 log.go:172] (0xc0007a0210) (0xc0005f0000) Stream removed, broadcasting: 5\n"
Feb 19 11:46:09.414: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 11:46:09.414: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 11:46:09.414: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 11:46:09.433: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 19 11:46:19.957: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 11:46:19.957: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 11:46:19.957: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 11:46:19.987: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:19.987: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:19.987: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:19.987: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:19.987: INFO: 
Feb 19 11:46:19.987: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:22.139: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:22.140: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:22.140: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:22.140: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:22.140: INFO: 
Feb 19 11:46:22.140: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:23.156: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:23.156: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:23.156: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:23.156: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:23.156: INFO: 
Feb 19 11:46:23.156: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:24.754: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:24.754: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:24.754: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:24.754: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:24.754: INFO: 
Feb 19 11:46:24.754: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:25.790: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:25.790: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:25.790: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:25.790: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:25.790: INFO: 
Feb 19 11:46:25.790: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:27.721: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:27.721: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:27.721: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:27.722: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:27.722: INFO: 
Feb 19 11:46:27.722: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:28.743: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:28.743: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:28.744: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:28.744: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:28.744: INFO: 
Feb 19 11:46:28.744: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 19 11:46:29.766: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 19 11:46:29.766: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:24 +0000 UTC  }]
Feb 19 11:46:29.767: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:29.767: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:46:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 11:45:44 +0000 UTC  }]
Feb 19 11:46:29.767: INFO: 
Feb 19 11:46:29.767: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-r572f
Feb 19 11:46:30.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:46:31.060: INFO: rc: 1
Feb 19 11:46:31.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0026b3320 exit status 1   true [0xc0015166d8 0xc0015166f8 0xc001516718] [0xc0015166d8 0xc0015166f8 0xc001516718] [0xc0015166f0 0xc001516710] [0x935700 0x935700] 0xc00156a540 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 19 11:46:41.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:46:41.307: INFO: rc: 1
Feb 19 11:46:41.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0026b3440 exit status 1   true [0xc001516720 0xc001516738 0xc001516758] [0xc001516720 0xc001516738 0xc001516758] [0xc001516730 0xc001516750] [0x935700 0x935700] 0xc00156ac60 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 19 11:46:51.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:46:51.451: INFO: rc: 1
Feb 19 11:46:51.452: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000b32a20 exit status 1   true [0xc002032710 0xc002032728 0xc002032740] [0xc002032710 0xc002032728 0xc002032740] [0xc002032720 0xc002032738] [0x935700 0x935700] 0xc0013195c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:01.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:01.598: INFO: rc: 1
Feb 19 11:47:01.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0012608a0 exit status 1   true [0xc0006c68e8 0xc0006c6918 0xc0006c6940] [0xc0006c68e8 0xc0006c6918 0xc0006c6940] [0xc0006c6908 0xc0006c6930] [0x935700 0x935700] 0xc0020d8f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:11.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:11.741: INFO: rc: 1
Feb 19 11:47:11.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0015661b0 exit status 1   true [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000ec88 0xc00000ed70] [0x935700 0x935700] 0xc000f81140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:21.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:21.887: INFO: rc: 1
Feb 19 11:47:21.888: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00208a180 exit status 1   true [0xc00017c000 0xc000ad4008 0xc000ad4020] [0xc00017c000 0xc000ad4008 0xc000ad4020] [0xc000ad4000 0xc000ad4018] [0x935700 0x935700] 0xc001c385a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:31.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:31.994: INFO: rc: 1
Feb 19 11:47:31.994: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0022002d0 exit status 1   true [0xc001910000 0xc001910018 0xc001910030] [0xc001910000 0xc001910018 0xc001910030] [0xc001910010 0xc001910028] [0x935700 0x935700] 0xc001b50e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:41.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:42.114: INFO: rc: 1
Feb 19 11:47:42.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0022003f0 exit status 1   true [0xc001910038 0xc001910050 0xc001910068] [0xc001910038 0xc001910050 0xc001910068] [0xc001910048 0xc001910060] [0x935700 0x935700] 0xc001b516e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:47:52.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:47:52.249: INFO: rc: 1
Feb 19 11:47:52.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0015664e0 exit status 1   true [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ee48 0xc00000eee8] [0x935700 0x935700] 0xc0009be540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:02.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:02.361: INFO: rc: 1
Feb 19 11:48:02.362: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566690 exit status 1   true [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000f058 0xc00000f148] [0x935700 0x935700] 0xc0009befc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:12.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:12.525: INFO: rc: 1
Feb 19 11:48:12.525: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf4120 exit status 1   true [0xc002032000 0xc002032018 0xc002032030] [0xc002032000 0xc002032018 0xc002032030] [0xc002032010 0xc002032028] [0x935700 0x935700] 0xc001afe780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:22.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:22.650: INFO: rc: 1
Feb 19 11:48:22.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566840 exit status 1   true [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f170 0xc00000f1e8] [0x935700 0x935700] 0xc001ad4480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:32.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:32.778: INFO: rc: 1
Feb 19 11:48:32.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566990 exit status 1   true [0xc00000f208 0xc00000f288 0xc00000f2b0] [0xc00000f208 0xc00000f288 0xc00000f2b0] [0xc00000f278 0xc00000f2a0] [0x935700 0x935700] 0xc001ad50e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:42.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:42.906: INFO: rc: 1
Feb 19 11:48:42.906: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf4270 exit status 1   true [0xc002032038 0xc002032050 0xc002032068] [0xc002032038 0xc002032050 0xc002032068] [0xc002032048 0xc002032060] [0x935700 0x935700] 0xc001afefc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:48:52.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:48:53.032: INFO: rc: 1
Feb 19 11:48:53.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf43c0 exit status 1   true [0xc002032070 0xc002032088 0xc0020320a0] [0xc002032070 0xc002032088 0xc0020320a0] [0xc002032080 0xc002032098] [0x935700 0x935700] 0xc001aff860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:03.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:03.133: INFO: rc: 1
Feb 19 11:49:03.134: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00208a1b0 exit status 1   true [0xc00017c140 0xc002032010 0xc002032028] [0xc00017c140 0xc002032010 0xc002032028] [0xc002032008 0xc002032020] [0x935700 0x935700] 0xc0009be9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:13.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:13.281: INFO: rc: 1
Feb 19 11:49:13.281: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566240 exit status 1   true [0xc001910000 0xc001910018 0xc001910030] [0xc001910000 0xc001910018 0xc001910030] [0xc001910010 0xc001910028] [0x935700 0x935700] 0xc000f80240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:23.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:23.418: INFO: rc: 1
Feb 19 11:49:23.419: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002200300 exit status 1   true [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000ec88 0xc00000ed70] [0x935700 0x935700] 0xc001ad4ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:33.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:33.572: INFO: rc: 1
Feb 19 11:49:33.572: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002200450 exit status 1   true [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ee48 0xc00000eee8] [0x935700 0x935700] 0xc001ad5440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:43.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:43.710: INFO: rc: 1
Feb 19 11:49:43.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566540 exit status 1   true [0xc001910038 0xc001910050 0xc001910068] [0xc001910038 0xc001910050 0xc001910068] [0xc001910048 0xc001910060] [0x935700 0x935700] 0xc000f81560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:49:53.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:49:53.874: INFO: rc: 1
Feb 19 11:49:53.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf4150 exit status 1   true [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4010 0xc000ad4028] [0x935700 0x935700] 0xc001b51440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:03.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:04.002: INFO: rc: 1
Feb 19 11:50:04.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0022005a0 exit status 1   true [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000f058 0xc00000f148] [0x935700 0x935700] 0xc001ad5f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:14.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:14.181: INFO: rc: 1
Feb 19 11:50:14.182: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002200750 exit status 1   true [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f170 0xc00000f1e8] [0x935700 0x935700] 0xc001afe780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:24.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:24.273: INFO: rc: 1
Feb 19 11:50:24.273: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002200870 exit status 1   true [0xc00000f208 0xc00000f288 0xc00000f2b0] [0xc00000f208 0xc00000f288 0xc00000f2b0] [0xc00000f278 0xc00000f2a0] [0x935700 0x935700] 0xc001afefc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:34.420: INFO: rc: 1
Feb 19 11:50:34.421: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00208a360 exit status 1   true [0xc002032030 0xc002032048 0xc002032060] [0xc002032030 0xc002032048 0xc002032060] [0xc002032040 0xc002032058] [0x935700 0x935700] 0xc0009bfa40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:44.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:44.551: INFO: rc: 1
Feb 19 11:50:44.551: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001566750 exit status 1   true [0xc001910070 0xc001910088 0xc0019100a0] [0xc001910070 0xc001910088 0xc0019100a0] [0xc001910080 0xc001910098] [0x935700 0x935700] 0xc001c38600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:50:54.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:50:54.670: INFO: rc: 1
Feb 19 11:50:54.671: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf4330 exit status 1   true [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4048 0xc000ad4070] [0x935700 0x935700] 0xc001b51980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:51:04.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:51:04.793: INFO: rc: 1
Feb 19 11:51:04.793: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f6c000 exit status 1   true [0xc000454020 0xc000454060 0xc0004540f8] [0xc000454020 0xc000454060 0xc0004540f8] [0xc000454038 0xc0004540a0] [0x935700 0x935700] 0xc0010a44e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:51:14.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:51:14.936: INFO: rc: 1
Feb 19 11:51:14.937: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f6c150 exit status 1   true [0xc00017c000 0xc000ad4008 0xc000ad4020] [0xc00017c000 0xc000ad4008 0xc000ad4020] [0xc000ad4000 0xc000ad4018] [0x935700 0x935700] 0xc000f81140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:51:24.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:51:25.052: INFO: rc: 1
Feb 19 11:51:25.053: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00133c210 exit status 1   true [0xc000454100 0xc000454128 0xc000454180] [0xc000454100 0xc000454128 0xc000454180] [0xc000454118 0xc000454158] [0x935700 0x935700] 0xc0009be540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 19 11:51:35.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-r572f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 11:51:35.169: INFO: rc: 1
Feb 19 11:51:35.170: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 19 11:51:35.170: INFO: Scaling statefulset ss to 0
Feb 19 11:51:35.191: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 19 11:51:35.194: INFO: Deleting all statefulset in ns e2e-tests-statefulset-r572f
Feb 19 11:51:35.198: INFO: Scaling statefulset ss to 0
Feb 19 11:51:35.209: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 11:51:35.212: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:51:35.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-r572f" for this suite.
Feb 19 11:51:41.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:51:41.466: INFO: namespace: e2e-tests-statefulset-r572f, resource: bindings, ignored listing per whitelist
Feb 19 11:51:41.557: INFO: namespace e2e-tests-statefulset-r572f deletion completed in 6.314437372s

• [SLOW TEST:379.326 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
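The repeated "unable to upgrade connection: container not found (\"nginx\")" and "Error from server (NotFound): pods \"ss-0\" not found" messages above are expected: the retried kubectl exec keeps failing while the container, and then the pod itself, is torn down during the scale-down. To reproduce the scale-to-zero step by hand, a minimal kubectl sketch follows; the demo namespace and the app=nginx label are placeholders, not values taken from this run, and the namespace is assumed to already exist.

  # Scale the StatefulSet to zero, as the test does after its exec probe gives up
  kubectl --namespace=demo scale statefulset ss --replicas=0
  # Poll the controller status until it reports zero replicas
  # (mirrors "Waiting for statefulset status.replicas updated to 0" above)
  kubectl --namespace=demo get statefulset ss -o jsonpath='{.status.replicas}'
  # Confirm the ordinal pods (ss-0, ss-1, ss-2) are gone
  kubectl --namespace=demo get pods -l app=nginx

This is roughly what the e2e helper does; the wait corresponds to the "Scaling statefulset ss to 0" / "Waiting for statefulset status.replicas updated to 0" lines in the log.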
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:51:41.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:51:54.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-5vd2z" for this suite.
Feb 19 11:52:00.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:52:00.696: INFO: namespace: e2e-tests-emptydir-wrapper-5vd2z, resource: bindings, ignored listing per whitelist
Feb 19 11:52:00.847: INFO: namespace e2e-tests-emptydir-wrapper-5vd2z deletion completed in 6.281523785s

• [SLOW TEST:19.290 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
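For reference, the shape of the wrapper-volume pod this test exercises (a secret volume and a configMap volume mounted side by side in one pod) can be approximated by hand. A minimal sketch, assuming a demo namespace, placeholder object names, and busybox standing in for whatever image the e2e framework actually uses:

  kubectl --namespace=demo create secret generic wrapper-secret --from-literal=data-1=value-1
  kubectl --namespace=demo create configmap wrapper-configmap --from-literal=data-1=value-1
  kubectl --namespace=demo apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-pod
  spec:
    containers:
    - name: test-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: wrapper-secret
    - name: configmap-volume
      configMap:
        name: wrapper-configmap
  EOF
  # Both projected volumes should be readable without conflicting with each other
  kubectl --namespace=demo exec wrapper-pod -- ls /etc/secret-volume /etc/configmap-volume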
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:52:00.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3d386cb0-530e-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 11:52:01.087: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-q8vpm" to be "success or failure"
Feb 19 11:52:01.114: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.525171ms
Feb 19 11:52:03.177: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090067598s
Feb 19 11:52:05.197: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109354119s
Feb 19 11:52:07.222: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134410219s
Feb 19 11:52:09.257: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16974351s
Feb 19 11:52:11.379: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.29184656s
STEP: Saw pod success
Feb 19 11:52:11.379: INFO: Pod "pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 11:52:11.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 19 11:52:12.918: INFO: Waiting for pod pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008 to disappear
Feb 19 11:52:12.967: INFO: Pod pod-configmaps-3d38fbb5-530e-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:52:12.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-q8vpm" for this suite.
Feb 19 11:52:19.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:52:19.158: INFO: namespace: e2e-tests-configmap-q8vpm, resource: bindings, ignored listing per whitelist
Feb 19 11:52:19.184: INFO: namespace e2e-tests-configmap-q8vpm deletion completed in 6.202811081s

• [SLOW TEST:18.337 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
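A hand-rolled equivalent of this ConfigMap-as-volume check, assuming a demo namespace, placeholder names, and busybox + cat in place of the framework's own mount-test image:

  kubectl --namespace=demo create configmap configmap-test-volume --from-literal=data-1=value-1
  kubectl --namespace=demo apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume
  EOF
  # Once the pod reaches Succeeded, its log should print the projected value
  kubectl --namespace=demo logs pod-configmaps-demo   # expected output: value-1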
SSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:52:19.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 19 11:52:29.518: INFO: Pod pod-hostip-4837a0f2-530e-11ea-a0a3-0242ac110008 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:52:29.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7g896" for this suite.
Feb 19 11:52:53.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:52:53.724: INFO: namespace: e2e-tests-pods-7g896, resource: bindings, ignored listing per whitelist
Feb 19 11:52:53.799: INFO: namespace e2e-tests-pods-7g896 deletion completed in 24.271197154s

• [SLOW TEST:34.615 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
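The same host-IP assertion can be made with plain kubectl; the pod name and namespace below are placeholders, while the node name is the one from this run:

  # The test asserts that status.hostIP is populated (10.96.1.240 in the log above)
  kubectl --namespace=demo get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'
  # For comparison, the address of the node the pod was scheduled to
  kubectl get node hunter-server-hu5at5svl7ps -o wide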
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:52:53.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5ce2b1f8-530e-11ea-a0a3-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5ce2b1f8-530e-11ea-a0a3-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:54:08.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f29tk" for this suite.
Feb 19 11:54:32.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:54:32.303: INFO: namespace: e2e-tests-configmap-f29tk, resource: bindings, ignored listing per whitelist
Feb 19 11:54:32.416: INFO: namespace e2e-tests-configmap-f29tk deletion completed in 24.277932514s

• [SLOW TEST:98.617 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
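The update-propagation behaviour exercised here can be observed manually: mount a configMap, patch it, and re-read the projected file after the kubelet has resynced the volume, which is why the test spends time "waiting to observe update in volume". A sketch with placeholder names and a demo namespace:

  kubectl --namespace=demo create configmap configmap-test-upd --from-literal=data-1=value-1
  kubectl --namespace=demo apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-upd-pod
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-upd
  EOF
  # Change the data in place
  kubectl --namespace=demo patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
  # After the kubelet resyncs the volume (typically within a minute or so):
  kubectl --namespace=demo exec configmap-upd-pod -- cat /etc/configmap-volume/data-1   # eventually: value-2

ConfigMap volume contents are updated eventually, not atomically with the API change, which is why both the e2e test and this sketch have to poll.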
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:54:32.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-979f9645-530e-11ea-a0a3-0242ac110008
STEP: Creating secret with name s-test-opt-upd-979f96f1-530e-11ea-a0a3-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-979f9645-530e-11ea-a0a3-0242ac110008
STEP: Updating secret s-test-opt-upd-979f96f1-530e-11ea-a0a3-0242ac110008
STEP: Creating secret with name s-test-opt-create-979f974f-530e-11ea-a0a3-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:54:51.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-77nbn" for this suite.
Feb 19 11:55:15.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:55:15.494: INFO: namespace: e2e-tests-secrets-77nbn, resource: bindings, ignored listing per whitelist
Feb 19 11:55:15.631: INFO: namespace e2e-tests-secrets-77nbn deletion completed in 24.34691304s

• [SLOW TEST:43.213 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:55:15.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 19 11:55:15.913: INFO: Waiting up to 5m0s for pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-nggqs" to be "success or failure"
Feb 19 11:55:15.982: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 68.322151ms
Feb 19 11:55:18.488: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.574374045s
Feb 19 11:55:20.538: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625062422s
Feb 19 11:55:23.169: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.255390586s
Feb 19 11:55:25.216: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.302836627s
Feb 19 11:55:27.243: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.329784301s
STEP: Saw pod success
Feb 19 11:55:27.243: INFO: Pod "downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 11:55:27.256: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 11:55:27.398: INFO: Waiting for pod downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008 to disappear
Feb 19 11:55:27.407: INFO: Pod downward-api-b15b6e5c-530e-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:55:27.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nggqs" for this suite.
Feb 19 11:55:33.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:55:33.482: INFO: namespace: e2e-tests-downward-api-nggqs, resource: bindings, ignored listing per whitelist
Feb 19 11:55:33.653: INFO: namespace e2e-tests-downward-api-nggqs deletion completed in 6.23786489s

• [SLOW TEST:18.022 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
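The downward-API wiring this test verifies is a fieldRef on status.hostIP injected as an environment variable. A minimal sketch with placeholder names, a demo namespace, and busybox as a stand-in image (the container name dapi-container matches the one in the log):

  kubectl --namespace=demo apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-hostip
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
  EOF
  kubectl --namespace=demo logs downward-api-hostip   # prints HOST_IP=<node address>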
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:55:33.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8btk
STEP: Creating a pod to test atomic-volume-subpath
Feb 19 11:55:34.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8btk" in namespace "e2e-tests-subpath-8dlnh" to be "success or failure"
Feb 19 11:55:34.153: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 75.084035ms
Feb 19 11:55:36.165: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086313476s
Feb 19 11:55:38.182: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104115784s
Feb 19 11:55:40.684: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.605410072s
Feb 19 11:55:42.713: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634285787s
Feb 19 11:55:44.736: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657598883s
Feb 19 11:55:46.771: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693122307s
Feb 19 11:55:48.935: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.856255298s
Feb 19 11:55:50.959: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.880250731s
Feb 19 11:55:52.972: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.893628203s
Feb 19 11:55:54.988: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 20.910159197s
Feb 19 11:55:57.005: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 22.926998069s
Feb 19 11:55:59.020: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 24.941550884s
Feb 19 11:56:01.033: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 26.954896542s
Feb 19 11:56:03.052: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 28.974036377s
Feb 19 11:56:05.068: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 30.990179061s
Feb 19 11:56:07.082: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Running", Reason="", readiness=false. Elapsed: 33.003476367s
Feb 19 11:56:09.173: INFO: Pod "pod-subpath-test-configmap-8btk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.095111415s
STEP: Saw pod success
Feb 19 11:56:09.173: INFO: Pod "pod-subpath-test-configmap-8btk" satisfied condition "success or failure"
Feb 19 11:56:09.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-8btk container test-container-subpath-configmap-8btk: 
STEP: delete the pod
Feb 19 11:56:09.255: INFO: Waiting for pod pod-subpath-test-configmap-8btk to disappear
Feb 19 11:56:09.261: INFO: Pod pod-subpath-test-configmap-8btk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8btk
Feb 19 11:56:09.261: INFO: Deleting pod "pod-subpath-test-configmap-8btk" in namespace "e2e-tests-subpath-8dlnh"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 11:56:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8dlnh" for this suite.
Feb 19 11:56:17.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 11:56:17.562: INFO: namespace: e2e-tests-subpath-8dlnh, resource: bindings, ignored listing per whitelist
Feb 19 11:56:17.633: INFO: namespace e2e-tests-subpath-8dlnh deletion completed in 8.195379536s

• [SLOW TEST:43.980 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
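The subPath mount under test projects a single configMap key to a specific file path inside the container. A reduced sketch with placeholder names and a demo namespace:

  kubectl --namespace=demo create configmap subpath-configmap --from-literal=index.html=hello
  kubectl --namespace=demo apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-test-configmap
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath
      image: busybox
      command: ["cat", "/test-volume/index.html"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /test-volume/index.html
        subPath: index.html
    volumes:
    - name: configmap-volume
      configMap:
        name: subpath-configmap
  EOF
  kubectl --namespace=demo logs pod-subpath-test-configmap   # expected output: hello

Unlike a whole-volume configMap mount, a subPath-mounted key does not pick up later updates to the configMap, so this pattern trades freshness for a precise file path.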
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 11:56:17.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 19 11:56:18.826: INFO: Pod name wrapped-volume-race-d6ddc120-530e-11ea-a0a3-0242ac110008: Found 0 pods out of 5
Feb 19 11:56:23.871: INFO: Pod name wrapped-volume-race-d6ddc120-530e-11ea-a0a3-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d6ddc120-530e-11ea-a0a3-0242ac110008 in namespace e2e-tests-emptydir-wrapper-xcqk8, will wait for the garbage collector to delete the pods
Feb 19 11:58:08.082: INFO: Deleting ReplicationController wrapped-volume-race-d6ddc120-530e-11ea-a0a3-0242ac110008 took: 28.504931ms
Feb 19 11:58:08.583: INFO: Terminating ReplicationController wrapped-volume-race-d6ddc120-530e-11ea-a0a3-0242ac110008 pods took: 501.009598ms
STEP: Creating RC which spawns configmap-volume pods
Feb 19 11:58:53.176: INFO: Pod name wrapped-volume-race-32d7caca-530f-11ea-a0a3-0242ac110008: Found 0 pods out of 5
Feb 19 11:58:58.199: INFO: Pod name wrapped-volume-race-32d7caca-530f-11ea-a0a3-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-32d7caca-530f-11ea-a0a3-0242ac110008 in namespace e2e-tests-emptydir-wrapper-xcqk8, will wait for the garbage collector to delete the pods
Feb 19 12:01:02.689: INFO: Deleting ReplicationController wrapped-volume-race-32d7caca-530f-11ea-a0a3-0242ac110008 took: 47.924203ms
Feb 19 12:01:03.091: INFO: Terminating ReplicationController wrapped-volume-race-32d7caca-530f-11ea-a0a3-0242ac110008 pods took: 402.396605ms
STEP: Creating RC which spawns configmap-volume pods
Feb 19 12:01:53.809: INFO: Pod name wrapped-volume-race-9e7e9326-530f-11ea-a0a3-0242ac110008: Found 0 pods out of 5
Feb 19 12:01:58.841: INFO: Pod name wrapped-volume-race-9e7e9326-530f-11ea-a0a3-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9e7e9326-530f-11ea-a0a3-0242ac110008 in namespace e2e-tests-emptydir-wrapper-xcqk8, will wait for the garbage collector to delete the pods
Feb 19 12:04:04.976: INFO: Deleting ReplicationController wrapped-volume-race-9e7e9326-530f-11ea-a0a3-0242ac110008 took: 16.627396ms
Feb 19 12:04:05.377: INFO: Terminating ReplicationController wrapped-volume-race-9e7e9326-530f-11ea-a0a3-0242ac110008 pods took: 400.7832ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:04:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-xcqk8" for this suite.
Feb 19 12:05:08.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:05:08.661: INFO: namespace: e2e-tests-emptydir-wrapper-xcqk8, resource: bindings, ignored listing per whitelist
Feb 19 12:05:08.765: INFO: namespace e2e-tests-emptydir-wrapper-xcqk8 deletion completed in 10.228236391s

• [SLOW TEST:531.132 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
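
The race test above creates 50 ConfigMaps and then, three times in a row, an RC whose 5 pods each mount all of those ConfigMaps as volumes, deleting the RC each time and letting the garbage collector reap the pods. A hedged sketch of such a ReplicationController follows; the names, image, and command are illustrative assumptions, with only the 50/5 counts taken from the log.

// Illustrative sketch of an RC whose pods mount many ConfigMap volumes; names and image are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ { // one volume per ConfigMap, as in "Creating 50 configmaps"
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/config/" + name})
	}

	replicas := int32(5) // "Found 5 pods out of 5"
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "wrapped-volume-race"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "wrapped-volume-race"}},
				Spec: corev1.PodSpec{
					Volumes: volumes,
					Containers: []corev1.Container{{
						Name:         "test-container",
						Image:        "busybox",
						Command:      []string{"sleep", "10000"},
						VolumeMounts: mounts,
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(rc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------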
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:05:08.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 19 12:05:37.209: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:37.209: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:37.314061       8 log.go:172] (0xc00071d6b0) (0xc001bde5a0) Create stream
I0219 12:05:37.314308       8 log.go:172] (0xc00071d6b0) (0xc001bde5a0) Stream added, broadcasting: 1
I0219 12:05:37.336674       8 log.go:172] (0xc00071d6b0) Reply frame received for 1
I0219 12:05:37.336832       8 log.go:172] (0xc00071d6b0) (0xc000194820) Create stream
I0219 12:05:37.336878       8 log.go:172] (0xc00071d6b0) (0xc000194820) Stream added, broadcasting: 3
I0219 12:05:37.338639       8 log.go:172] (0xc00071d6b0) Reply frame received for 3
I0219 12:05:37.338676       8 log.go:172] (0xc00071d6b0) (0xc0005940a0) Create stream
I0219 12:05:37.338694       8 log.go:172] (0xc00071d6b0) (0xc0005940a0) Stream added, broadcasting: 5
I0219 12:05:37.339900       8 log.go:172] (0xc00071d6b0) Reply frame received for 5
I0219 12:05:37.501838       8 log.go:172] (0xc00071d6b0) Data frame received for 3
I0219 12:05:37.501894       8 log.go:172] (0xc000194820) (3) Data frame handling
I0219 12:05:37.501918       8 log.go:172] (0xc000194820) (3) Data frame sent
I0219 12:05:37.655928       8 log.go:172] (0xc00071d6b0) (0xc000194820) Stream removed, broadcasting: 3
I0219 12:05:37.656292       8 log.go:172] (0xc00071d6b0) Data frame received for 1
I0219 12:05:37.656802       8 log.go:172] (0xc00071d6b0) (0xc0005940a0) Stream removed, broadcasting: 5
I0219 12:05:37.657012       8 log.go:172] (0xc001bde5a0) (1) Data frame handling
I0219 12:05:37.657052       8 log.go:172] (0xc001bde5a0) (1) Data frame sent
I0219 12:05:37.657076       8 log.go:172] (0xc00071d6b0) (0xc001bde5a0) Stream removed, broadcasting: 1
I0219 12:05:37.657096       8 log.go:172] (0xc00071d6b0) Go away received
I0219 12:05:37.658055       8 log.go:172] (0xc00071d6b0) (0xc001bde5a0) Stream removed, broadcasting: 1
I0219 12:05:37.658120       8 log.go:172] (0xc00071d6b0) (0xc000194820) Stream removed, broadcasting: 3
I0219 12:05:37.658129       8 log.go:172] (0xc00071d6b0) (0xc0005940a0) Stream removed, broadcasting: 5
Feb 19 12:05:37.658: INFO: Exec stderr: ""
Feb 19 12:05:37.658: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:37.658: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:37.829824       8 log.go:172] (0xc00071dce0) (0xc001bde960) Create stream
I0219 12:05:37.829922       8 log.go:172] (0xc00071dce0) (0xc001bde960) Stream added, broadcasting: 1
I0219 12:05:37.837804       8 log.go:172] (0xc00071dce0) Reply frame received for 1
I0219 12:05:37.837861       8 log.go:172] (0xc00071dce0) (0xc00132a320) Create stream
I0219 12:05:37.837874       8 log.go:172] (0xc00071dce0) (0xc00132a320) Stream added, broadcasting: 3
I0219 12:05:37.838751       8 log.go:172] (0xc00071dce0) Reply frame received for 3
I0219 12:05:37.838776       8 log.go:172] (0xc00071dce0) (0xc00132a3c0) Create stream
I0219 12:05:37.838782       8 log.go:172] (0xc00071dce0) (0xc00132a3c0) Stream added, broadcasting: 5
I0219 12:05:37.839614       8 log.go:172] (0xc00071dce0) Reply frame received for 5
I0219 12:05:38.117669       8 log.go:172] (0xc00071dce0) Data frame received for 3
I0219 12:05:38.117789       8 log.go:172] (0xc00132a320) (3) Data frame handling
I0219 12:05:38.117913       8 log.go:172] (0xc00132a320) (3) Data frame sent
I0219 12:05:38.276731       8 log.go:172] (0xc00071dce0) (0xc00132a320) Stream removed, broadcasting: 3
I0219 12:05:38.276862       8 log.go:172] (0xc00071dce0) Data frame received for 1
I0219 12:05:38.276889       8 log.go:172] (0xc001bde960) (1) Data frame handling
I0219 12:05:38.276925       8 log.go:172] (0xc001bde960) (1) Data frame sent
I0219 12:05:38.276950       8 log.go:172] (0xc00071dce0) (0xc00132a3c0) Stream removed, broadcasting: 5
I0219 12:05:38.277023       8 log.go:172] (0xc00071dce0) (0xc001bde960) Stream removed, broadcasting: 1
I0219 12:05:38.277035       8 log.go:172] (0xc00071dce0) Go away received
I0219 12:05:38.277285       8 log.go:172] (0xc00071dce0) (0xc001bde960) Stream removed, broadcasting: 1
I0219 12:05:38.277304       8 log.go:172] (0xc00071dce0) (0xc00132a320) Stream removed, broadcasting: 3
I0219 12:05:38.277318       8 log.go:172] (0xc00071dce0) (0xc00132a3c0) Stream removed, broadcasting: 5
Feb 19 12:05:38.277: INFO: Exec stderr: ""
Feb 19 12:05:38.277: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:38.277: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:38.374601       8 log.go:172] (0xc0000ebd90) (0xc001bdec80) Create stream
I0219 12:05:38.374965       8 log.go:172] (0xc0000ebd90) (0xc001bdec80) Stream added, broadcasting: 1
I0219 12:05:38.381686       8 log.go:172] (0xc0000ebd90) Reply frame received for 1
I0219 12:05:38.381740       8 log.go:172] (0xc0000ebd90) (0xc00132a460) Create stream
I0219 12:05:38.381757       8 log.go:172] (0xc0000ebd90) (0xc00132a460) Stream added, broadcasting: 3
I0219 12:05:38.382908       8 log.go:172] (0xc0000ebd90) Reply frame received for 3
I0219 12:05:38.382955       8 log.go:172] (0xc0000ebd90) (0xc000195400) Create stream
I0219 12:05:38.382971       8 log.go:172] (0xc0000ebd90) (0xc000195400) Stream added, broadcasting: 5
I0219 12:05:38.383918       8 log.go:172] (0xc0000ebd90) Reply frame received for 5
I0219 12:05:38.551527       8 log.go:172] (0xc0000ebd90) Data frame received for 3
I0219 12:05:38.551661       8 log.go:172] (0xc00132a460) (3) Data frame handling
I0219 12:05:38.551695       8 log.go:172] (0xc00132a460) (3) Data frame sent
I0219 12:05:38.693332       8 log.go:172] (0xc0000ebd90) Data frame received for 1
I0219 12:05:38.693401       8 log.go:172] (0xc001bdec80) (1) Data frame handling
I0219 12:05:38.693441       8 log.go:172] (0xc001bdec80) (1) Data frame sent
I0219 12:05:38.693459       8 log.go:172] (0xc0000ebd90) (0xc001bdec80) Stream removed, broadcasting: 1
I0219 12:05:38.693741       8 log.go:172] (0xc0000ebd90) (0xc00132a460) Stream removed, broadcasting: 3
I0219 12:05:38.694224       8 log.go:172] (0xc0000ebd90) (0xc000195400) Stream removed, broadcasting: 5
I0219 12:05:38.694425       8 log.go:172] (0xc0000ebd90) (0xc001bdec80) Stream removed, broadcasting: 1
I0219 12:05:38.694501       8 log.go:172] (0xc0000ebd90) (0xc00132a460) Stream removed, broadcasting: 3
I0219 12:05:38.694585       8 log.go:172] (0xc0000ebd90) (0xc000195400) Stream removed, broadcasting: 5
Feb 19 12:05:38.695: INFO: Exec stderr: ""
I0219 12:05:38.695608       8 log.go:172] (0xc0000ebd90) Go away received
Feb 19 12:05:38.695: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:38.695: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:38.775092       8 log.go:172] (0xc000b1e2c0) (0xc000594640) Create stream
I0219 12:05:38.775224       8 log.go:172] (0xc000b1e2c0) (0xc000594640) Stream added, broadcasting: 1
I0219 12:05:38.788206       8 log.go:172] (0xc000b1e2c0) Reply frame received for 1
I0219 12:05:38.788285       8 log.go:172] (0xc000b1e2c0) (0xc000594780) Create stream
I0219 12:05:38.788299       8 log.go:172] (0xc000b1e2c0) (0xc000594780) Stream added, broadcasting: 3
I0219 12:05:38.789350       8 log.go:172] (0xc000b1e2c0) Reply frame received for 3
I0219 12:05:38.789386       8 log.go:172] (0xc000b1e2c0) (0xc000371f40) Create stream
I0219 12:05:38.789403       8 log.go:172] (0xc000b1e2c0) (0xc000371f40) Stream added, broadcasting: 5
I0219 12:05:38.790481       8 log.go:172] (0xc000b1e2c0) Reply frame received for 5
I0219 12:05:38.925948       8 log.go:172] (0xc000b1e2c0) Data frame received for 3
I0219 12:05:38.926045       8 log.go:172] (0xc000594780) (3) Data frame handling
I0219 12:05:38.926077       8 log.go:172] (0xc000594780) (3) Data frame sent
I0219 12:05:39.040976       8 log.go:172] (0xc000b1e2c0) Data frame received for 1
I0219 12:05:39.041081       8 log.go:172] (0xc000b1e2c0) (0xc000594780) Stream removed, broadcasting: 3
I0219 12:05:39.041181       8 log.go:172] (0xc000594640) (1) Data frame handling
I0219 12:05:39.041218       8 log.go:172] (0xc000594640) (1) Data frame sent
I0219 12:05:39.041235       8 log.go:172] (0xc000b1e2c0) (0xc000594640) Stream removed, broadcasting: 1
I0219 12:05:39.041412       8 log.go:172] (0xc000b1e2c0) (0xc000371f40) Stream removed, broadcasting: 5
I0219 12:05:39.041517       8 log.go:172] (0xc000b1e2c0) (0xc000594640) Stream removed, broadcasting: 1
I0219 12:05:39.041543       8 log.go:172] (0xc000b1e2c0) (0xc000594780) Stream removed, broadcasting: 3
I0219 12:05:39.041569       8 log.go:172] (0xc000b1e2c0) (0xc000371f40) Stream removed, broadcasting: 5
Feb 19 12:05:39.042: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 19 12:05:39.042: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:39.042: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:39.115794       8 log.go:172] (0xc000b1e790) (0xc000594dc0) Create stream
I0219 12:05:39.115948       8 log.go:172] (0xc000b1e790) (0xc000594dc0) Stream added, broadcasting: 1
I0219 12:05:39.122000       8 log.go:172] (0xc000b1e790) Reply frame received for 1
I0219 12:05:39.122179       8 log.go:172] (0xc000b1e790) (0xc001bdedc0) Create stream
I0219 12:05:39.122203       8 log.go:172] (0xc000b1e790) (0xc001bdedc0) Stream added, broadcasting: 3
I0219 12:05:39.123430       8 log.go:172] (0xc000b1e790) Reply frame received for 3
I0219 12:05:39.123451       8 log.go:172] (0xc000b1e790) (0xc001bdee60) Create stream
I0219 12:05:39.123460       8 log.go:172] (0xc000b1e790) (0xc001bdee60) Stream added, broadcasting: 5
I0219 12:05:39.124192       8 log.go:172] (0xc000b1e790) Reply frame received for 5
I0219 12:05:39.232825       8 log.go:172] (0xc000b1e790) Data frame received for 3
I0219 12:05:39.232970       8 log.go:172] (0xc001bdedc0) (3) Data frame handling
I0219 12:05:39.233000       8 log.go:172] (0xc001bdedc0) (3) Data frame sent
I0219 12:05:39.348853       8 log.go:172] (0xc000b1e790) Data frame received for 1
I0219 12:05:39.349010       8 log.go:172] (0xc000594dc0) (1) Data frame handling
I0219 12:05:39.349060       8 log.go:172] (0xc000594dc0) (1) Data frame sent
I0219 12:05:39.349092       8 log.go:172] (0xc000b1e790) (0xc000594dc0) Stream removed, broadcasting: 1
I0219 12:05:39.349370       8 log.go:172] (0xc000b1e790) (0xc001bdee60) Stream removed, broadcasting: 5
I0219 12:05:39.349430       8 log.go:172] (0xc000b1e790) (0xc001bdedc0) Stream removed, broadcasting: 3
I0219 12:05:39.349456       8 log.go:172] (0xc000b1e790) Go away received
I0219 12:05:39.349718       8 log.go:172] (0xc000b1e790) (0xc000594dc0) Stream removed, broadcasting: 1
I0219 12:05:39.349734       8 log.go:172] (0xc000b1e790) (0xc001bdedc0) Stream removed, broadcasting: 3
I0219 12:05:39.349746       8 log.go:172] (0xc000b1e790) (0xc001bdee60) Stream removed, broadcasting: 5
Feb 19 12:05:39.349: INFO: Exec stderr: ""
Feb 19 12:05:39.349: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:39.349: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:39.410226       8 log.go:172] (0xc0022662c0) (0xc00132a8c0) Create stream
I0219 12:05:39.410532       8 log.go:172] (0xc0022662c0) (0xc00132a8c0) Stream added, broadcasting: 1
I0219 12:05:39.416786       8 log.go:172] (0xc0022662c0) Reply frame received for 1
I0219 12:05:39.416834       8 log.go:172] (0xc0022662c0) (0xc0008c2460) Create stream
I0219 12:05:39.416853       8 log.go:172] (0xc0022662c0) (0xc0008c2460) Stream added, broadcasting: 3
I0219 12:05:39.418974       8 log.go:172] (0xc0022662c0) Reply frame received for 3
I0219 12:05:39.418998       8 log.go:172] (0xc0022662c0) (0xc001bdef00) Create stream
I0219 12:05:39.419008       8 log.go:172] (0xc0022662c0) (0xc001bdef00) Stream added, broadcasting: 5
I0219 12:05:39.420373       8 log.go:172] (0xc0022662c0) Reply frame received for 5
I0219 12:05:39.543665       8 log.go:172] (0xc0022662c0) Data frame received for 3
I0219 12:05:39.543729       8 log.go:172] (0xc0008c2460) (3) Data frame handling
I0219 12:05:39.543755       8 log.go:172] (0xc0008c2460) (3) Data frame sent
I0219 12:05:39.658571       8 log.go:172] (0xc0022662c0) Data frame received for 1
I0219 12:05:39.658633       8 log.go:172] (0xc00132a8c0) (1) Data frame handling
I0219 12:05:39.658655       8 log.go:172] (0xc00132a8c0) (1) Data frame sent
I0219 12:05:39.659826       8 log.go:172] (0xc0022662c0) (0xc00132a8c0) Stream removed, broadcasting: 1
I0219 12:05:39.660791       8 log.go:172] (0xc0022662c0) (0xc0008c2460) Stream removed, broadcasting: 3
I0219 12:05:39.661859       8 log.go:172] (0xc0022662c0) (0xc001bdef00) Stream removed, broadcasting: 5
I0219 12:05:39.661907       8 log.go:172] (0xc0022662c0) (0xc00132a8c0) Stream removed, broadcasting: 1
I0219 12:05:39.661916       8 log.go:172] (0xc0022662c0) (0xc0008c2460) Stream removed, broadcasting: 3
I0219 12:05:39.661980       8 log.go:172] (0xc0022662c0) (0xc001bdef00) Stream removed, broadcasting: 5
Feb 19 12:05:39.662: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 19 12:05:39.662: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:39.662: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:39.752289       8 log.go:172] (0xc002266790) (0xc00132ab40) Create stream
I0219 12:05:39.752370       8 log.go:172] (0xc002266790) (0xc00132ab40) Stream added, broadcasting: 1
I0219 12:05:39.768660       8 log.go:172] (0xc002266790) Reply frame received for 1
I0219 12:05:39.768740       8 log.go:172] (0xc002266790) (0xc0008c26e0) Create stream
I0219 12:05:39.768753       8 log.go:172] (0xc002266790) (0xc0008c26e0) Stream added, broadcasting: 3
I0219 12:05:39.770848       8 log.go:172] (0xc002266790) Reply frame received for 3
I0219 12:05:39.770888       8 log.go:172] (0xc002266790) (0xc000cea000) Create stream
I0219 12:05:39.770915       8 log.go:172] (0xc002266790) (0xc000cea000) Stream added, broadcasting: 5
I0219 12:05:39.772828       8 log.go:172] (0xc002266790) Reply frame received for 5
I0219 12:05:39.894621       8 log.go:172] (0xc002266790) Data frame received for 3
I0219 12:05:39.894700       8 log.go:172] (0xc0008c26e0) (3) Data frame handling
I0219 12:05:39.894720       8 log.go:172] (0xc0008c26e0) (3) Data frame sent
I0219 12:05:40.002736       8 log.go:172] (0xc002266790) Data frame received for 1
I0219 12:05:40.002898       8 log.go:172] (0xc002266790) (0xc000cea000) Stream removed, broadcasting: 5
I0219 12:05:40.002974       8 log.go:172] (0xc00132ab40) (1) Data frame handling
I0219 12:05:40.003016       8 log.go:172] (0xc00132ab40) (1) Data frame sent
I0219 12:05:40.003064       8 log.go:172] (0xc002266790) (0xc0008c26e0) Stream removed, broadcasting: 3
I0219 12:05:40.003168       8 log.go:172] (0xc002266790) (0xc00132ab40) Stream removed, broadcasting: 1
I0219 12:05:40.003238       8 log.go:172] (0xc002266790) Go away received
I0219 12:05:40.003684       8 log.go:172] (0xc002266790) (0xc00132ab40) Stream removed, broadcasting: 1
I0219 12:05:40.003724       8 log.go:172] (0xc002266790) (0xc0008c26e0) Stream removed, broadcasting: 3
I0219 12:05:40.003748       8 log.go:172] (0xc002266790) (0xc000cea000) Stream removed, broadcasting: 5
Feb 19 12:05:40.003: INFO: Exec stderr: ""
Feb 19 12:05:40.003: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:40.004: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:40.062850       8 log.go:172] (0xc00260a2c0) (0xc001bdf180) Create stream
I0219 12:05:40.062930       8 log.go:172] (0xc00260a2c0) (0xc001bdf180) Stream added, broadcasting: 1
I0219 12:05:40.068253       8 log.go:172] (0xc00260a2c0) Reply frame received for 1
I0219 12:05:40.068363       8 log.go:172] (0xc00260a2c0) (0xc000595400) Create stream
I0219 12:05:40.068375       8 log.go:172] (0xc00260a2c0) (0xc000595400) Stream added, broadcasting: 3
I0219 12:05:40.069635       8 log.go:172] (0xc00260a2c0) Reply frame received for 3
I0219 12:05:40.069682       8 log.go:172] (0xc00260a2c0) (0xc001bdf220) Create stream
I0219 12:05:40.069693       8 log.go:172] (0xc00260a2c0) (0xc001bdf220) Stream added, broadcasting: 5
I0219 12:05:40.071431       8 log.go:172] (0xc00260a2c0) Reply frame received for 5
I0219 12:05:40.213374       8 log.go:172] (0xc00260a2c0) Data frame received for 3
I0219 12:05:40.213474       8 log.go:172] (0xc000595400) (3) Data frame handling
I0219 12:05:40.213507       8 log.go:172] (0xc000595400) (3) Data frame sent
I0219 12:05:40.366215       8 log.go:172] (0xc00260a2c0) (0xc000595400) Stream removed, broadcasting: 3
I0219 12:05:40.366336       8 log.go:172] (0xc00260a2c0) Data frame received for 1
I0219 12:05:40.366358       8 log.go:172] (0xc001bdf180) (1) Data frame handling
I0219 12:05:40.366382       8 log.go:172] (0xc001bdf180) (1) Data frame sent
I0219 12:05:40.366401       8 log.go:172] (0xc00260a2c0) (0xc001bdf180) Stream removed, broadcasting: 1
I0219 12:05:40.366441       8 log.go:172] (0xc00260a2c0) (0xc001bdf220) Stream removed, broadcasting: 5
I0219 12:05:40.366466       8 log.go:172] (0xc00260a2c0) Go away received
I0219 12:05:40.366649       8 log.go:172] (0xc00260a2c0) (0xc001bdf180) Stream removed, broadcasting: 1
I0219 12:05:40.366661       8 log.go:172] (0xc00260a2c0) (0xc000595400) Stream removed, broadcasting: 3
I0219 12:05:40.366671       8 log.go:172] (0xc00260a2c0) (0xc001bdf220) Stream removed, broadcasting: 5
Feb 19 12:05:40.366: INFO: Exec stderr: ""
Feb 19 12:05:40.366: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:40.366: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:40.500373       8 log.go:172] (0xc00260a790) (0xc001bdf4a0) Create stream
I0219 12:05:40.500498       8 log.go:172] (0xc00260a790) (0xc001bdf4a0) Stream added, broadcasting: 1
I0219 12:05:40.515998       8 log.go:172] (0xc00260a790) Reply frame received for 1
I0219 12:05:40.516134       8 log.go:172] (0xc00260a790) (0xc001bdf540) Create stream
I0219 12:05:40.516156       8 log.go:172] (0xc00260a790) (0xc001bdf540) Stream added, broadcasting: 3
I0219 12:05:40.519253       8 log.go:172] (0xc00260a790) Reply frame received for 3
I0219 12:05:40.519451       8 log.go:172] (0xc00260a790) (0xc000cea0a0) Create stream
I0219 12:05:40.519482       8 log.go:172] (0xc00260a790) (0xc000cea0a0) Stream added, broadcasting: 5
I0219 12:05:40.520862       8 log.go:172] (0xc00260a790) Reply frame received for 5
I0219 12:05:40.835516       8 log.go:172] (0xc00260a790) Data frame received for 3
I0219 12:05:40.835629       8 log.go:172] (0xc001bdf540) (3) Data frame handling
I0219 12:05:40.835692       8 log.go:172] (0xc001bdf540) (3) Data frame sent
I0219 12:05:41.082683       8 log.go:172] (0xc00260a790) Data frame received for 1
I0219 12:05:41.082776       8 log.go:172] (0xc00260a790) (0xc000cea0a0) Stream removed, broadcasting: 5
I0219 12:05:41.082828       8 log.go:172] (0xc001bdf4a0) (1) Data frame handling
I0219 12:05:41.082841       8 log.go:172] (0xc001bdf4a0) (1) Data frame sent
I0219 12:05:41.082869       8 log.go:172] (0xc00260a790) (0xc001bdf540) Stream removed, broadcasting: 3
I0219 12:05:41.082945       8 log.go:172] (0xc00260a790) (0xc001bdf4a0) Stream removed, broadcasting: 1
I0219 12:05:41.082983       8 log.go:172] (0xc00260a790) Go away received
I0219 12:05:41.083292       8 log.go:172] (0xc00260a790) (0xc001bdf4a0) Stream removed, broadcasting: 1
I0219 12:05:41.083313       8 log.go:172] (0xc00260a790) (0xc001bdf540) Stream removed, broadcasting: 3
I0219 12:05:41.083326       8 log.go:172] (0xc00260a790) (0xc000cea0a0) Stream removed, broadcasting: 5
Feb 19 12:05:41.083: INFO: Exec stderr: ""
Feb 19 12:05:41.083: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-76477 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:05:41.083: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:05:41.156450       8 log.go:172] (0xc000b1ec60) (0xc000595f40) Create stream
I0219 12:05:41.156571       8 log.go:172] (0xc000b1ec60) (0xc000595f40) Stream added, broadcasting: 1
I0219 12:05:41.161033       8 log.go:172] (0xc000b1ec60) Reply frame received for 1
I0219 12:05:41.161066       8 log.go:172] (0xc000b1ec60) (0xc001bdf5e0) Create stream
I0219 12:05:41.161073       8 log.go:172] (0xc000b1ec60) (0xc001bdf5e0) Stream added, broadcasting: 3
I0219 12:05:41.161866       8 log.go:172] (0xc000b1ec60) Reply frame received for 3
I0219 12:05:41.161889       8 log.go:172] (0xc000b1ec60) (0xc001bdf680) Create stream
I0219 12:05:41.161903       8 log.go:172] (0xc000b1ec60) (0xc001bdf680) Stream added, broadcasting: 5
I0219 12:05:41.163496       8 log.go:172] (0xc000b1ec60) Reply frame received for 5
I0219 12:05:41.261984       8 log.go:172] (0xc000b1ec60) Data frame received for 3
I0219 12:05:41.262055       8 log.go:172] (0xc001bdf5e0) (3) Data frame handling
I0219 12:05:41.262083       8 log.go:172] (0xc001bdf5e0) (3) Data frame sent
I0219 12:05:41.388450       8 log.go:172] (0xc000b1ec60) Data frame received for 1
I0219 12:05:41.388533       8 log.go:172] (0xc000b1ec60) (0xc001bdf5e0) Stream removed, broadcasting: 3
I0219 12:05:41.388581       8 log.go:172] (0xc000595f40) (1) Data frame handling
I0219 12:05:41.388609       8 log.go:172] (0xc000595f40) (1) Data frame sent
I0219 12:05:41.388657       8 log.go:172] (0xc000b1ec60) (0xc001bdf680) Stream removed, broadcasting: 5
I0219 12:05:41.388914       8 log.go:172] (0xc000b1ec60) (0xc000595f40) Stream removed, broadcasting: 1
I0219 12:05:41.389053       8 log.go:172] (0xc000b1ec60) Go away received
I0219 12:05:41.389586       8 log.go:172] (0xc000b1ec60) (0xc000595f40) Stream removed, broadcasting: 1
I0219 12:05:41.389624       8 log.go:172] (0xc000b1ec60) (0xc001bdf5e0) Stream removed, broadcasting: 3
I0219 12:05:41.389643       8 log.go:172] (0xc000b1ec60) (0xc001bdf680) Stream removed, broadcasting: 5
Feb 19 12:05:41.389: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:05:41.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-76477" for this suite.
Feb 19 12:06:37.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:06:37.750: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-76477, resource: bindings, ignored listing per whitelist
Feb 19 12:06:37.764: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-76477 deletion completed in 56.358989846s

• [SLOW TEST:88.999 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
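
The /etc/hosts checks above rely on two pod shapes: a hostNetwork=false pod in which the kubelet injects its managed /etc/hosts (except into a container that mounts something at /etc/hosts itself), and a hostNetwork=true pod whose containers keep the node's file. A hedged Go sketch of the first shape, with one container opting out by mounting the host's file over /etc/hosts; container names, image, and the hostPath are assumptions, not values from this run.

// Illustrative only: container names, image and the hostPath are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostsType := corev1.HostPathFile
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: false, // kubelet manages /etc/hosts for these containers...
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts", Type: &hostsType},
				},
			}},
			Containers: []corev1.Container{
				{
					Name:    "busybox-1",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
					// no /etc/hosts mount: the kubelet-managed file is expected here
				},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
					// ...but not for a container that mounts its own file at /etc/hosts
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

The hostNetwork=true variant used for the last two exec checks differs only in setting HostNetwork: true, which is why its containers see the node's own /etc/hosts rather than a kubelet-managed copy.

------------------------------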
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:06:37.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0219 12:06:41.202274       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 19 12:06:41.202: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:06:41.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2lqv8" for this suite.
Feb 19 12:06:47.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:06:47.860: INFO: namespace: e2e-tests-gc-2lqv8, resource: bindings, ignored listing per whitelist
Feb 19 12:06:47.949: INFO: namespace e2e-tests-gc-2lqv8 deletion completed in 6.742480833s

• [SLOW TEST:10.184 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
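
The garbage-collector spec above deletes a Deployment without orphaning its children and then polls until the owned ReplicaSet and pods disappear (the "expected 0 rs, got 1 rs" lines are that retry loop, not a failure). With client-go, "not orphaning" means passing a background or foreground propagation policy in the delete options. A minimal sketch of those options; the deployment name in the comment is an assumption, and the exact delete call signature depends on the client-go release.

// Sketch of delete options only; how they are passed to the API depends on the client-go version in use.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation: the Deployment is deleted first and the garbage
	// collector then removes the owned ReplicaSet and pods. Foreground would
	// also avoid orphaning; DeletePropagationOrphan is what this test rules out.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// e.g. client.AppsV1().Deployments(ns).Delete(ctx, "test-deployment", opts)
	// (signature varies across client-go releases; shown only as a comment here)
	out, err := json.MarshalIndent(opts, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------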
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:06:47.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:06:48.291: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 19 12:06:48.312: INFO: Number of nodes with available pods: 0
Feb 19 12:06:48.312: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 19 12:06:48.471: INFO: Number of nodes with available pods: 0
Feb 19 12:06:48.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:49.566: INFO: Number of nodes with available pods: 0
Feb 19 12:06:49.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:50.640: INFO: Number of nodes with available pods: 0
Feb 19 12:06:50.640: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:51.484: INFO: Number of nodes with available pods: 0
Feb 19 12:06:51.484: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:52.488: INFO: Number of nodes with available pods: 0
Feb 19 12:06:52.488: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:54.170: INFO: Number of nodes with available pods: 0
Feb 19 12:06:54.170: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:54.728: INFO: Number of nodes with available pods: 0
Feb 19 12:06:54.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:55.482: INFO: Number of nodes with available pods: 0
Feb 19 12:06:55.482: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:56.559: INFO: Number of nodes with available pods: 0
Feb 19 12:06:56.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:57.493: INFO: Number of nodes with available pods: 1
Feb 19 12:06:57.493: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 19 12:06:57.556: INFO: Number of nodes with available pods: 1
Feb 19 12:06:57.556: INFO: Number of running nodes: 0, number of available pods: 1
Feb 19 12:06:58.599: INFO: Number of nodes with available pods: 0
Feb 19 12:06:58.599: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 19 12:06:58.630: INFO: Number of nodes with available pods: 0
Feb 19 12:06:58.630: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:06:59.668: INFO: Number of nodes with available pods: 0
Feb 19 12:06:59.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:00.689: INFO: Number of nodes with available pods: 0
Feb 19 12:07:00.689: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:02.061: INFO: Number of nodes with available pods: 0
Feb 19 12:07:02.061: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:02.643: INFO: Number of nodes with available pods: 0
Feb 19 12:07:02.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:03.722: INFO: Number of nodes with available pods: 0
Feb 19 12:07:03.722: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:04.673: INFO: Number of nodes with available pods: 0
Feb 19 12:07:04.673: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:05.648: INFO: Number of nodes with available pods: 0
Feb 19 12:07:05.648: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:06.657: INFO: Number of nodes with available pods: 0
Feb 19 12:07:06.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:07.647: INFO: Number of nodes with available pods: 0
Feb 19 12:07:07.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:08.654: INFO: Number of nodes with available pods: 0
Feb 19 12:07:08.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:09.668: INFO: Number of nodes with available pods: 0
Feb 19 12:07:09.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:10.654: INFO: Number of nodes with available pods: 0
Feb 19 12:07:10.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:11.651: INFO: Number of nodes with available pods: 0
Feb 19 12:07:11.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:12.686: INFO: Number of nodes with available pods: 0
Feb 19 12:07:12.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:13.643: INFO: Number of nodes with available pods: 0
Feb 19 12:07:13.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:14.667: INFO: Number of nodes with available pods: 0
Feb 19 12:07:14.667: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:15.646: INFO: Number of nodes with available pods: 0
Feb 19 12:07:15.646: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:16.655: INFO: Number of nodes with available pods: 0
Feb 19 12:07:16.655: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:17.983: INFO: Number of nodes with available pods: 0
Feb 19 12:07:17.983: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:18.673: INFO: Number of nodes with available pods: 0
Feb 19 12:07:18.673: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:19.653: INFO: Number of nodes with available pods: 0
Feb 19 12:07:19.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:20.657: INFO: Number of nodes with available pods: 0
Feb 19 12:07:20.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:21.650: INFO: Number of nodes with available pods: 0
Feb 19 12:07:21.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:07:22.641: INFO: Number of nodes with available pods: 1
Feb 19 12:07:22.641: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bdjlx, will wait for the garbage collector to delete the pods
Feb 19 12:07:22.710: INFO: Deleting DaemonSet.extensions daemon-set took: 8.832446ms
Feb 19 12:07:22.910: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.465134ms
Feb 19 12:07:29.021: INFO: Number of nodes with available pods: 0
Feb 19 12:07:29.021: INFO: Number of running nodes: 0, number of available pods: 0
Feb 19 12:07:29.028: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bdjlx/daemonsets","resourceVersion":"22197192"},"items":null}

Feb 19 12:07:29.031: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bdjlx/pods","resourceVersion":"22197192"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:07:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bdjlx" for this suite.
Feb 19 12:07:35.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:07:35.441: INFO: namespace: e2e-tests-daemonsets-bdjlx, resource: bindings, ignored listing per whitelist
Feb 19 12:07:35.535: INFO: namespace e2e-tests-daemonsets-bdjlx deletion completed in 6.360275527s

• [SLOW TEST:47.586 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
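
The complex-daemon flow above gives the DaemonSet a node selector, flips a node label to blue so exactly one daemon pod is scheduled, relabels the node so the pod is unscheduled, then updates the selector to green and switches the update strategy to RollingUpdate. A hedged sketch of a DaemonSet with that shape; the label keys, values, image, and names are illustrative assumptions.

// Illustrative only: label key/values, image and names are assumptions, not taken from this run.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Switching Type to RollingUpdateDaemonSetStrategyType corresponds to the
			// "change its update strategy to RollingUpdate" step in the log.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying this label run a daemon pod; the test toggles
					// the label on one node to schedule and unschedule the daemon.
					NodeSelector: map[string]string{"color": "green"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx",
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ds, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------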
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:07:35.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 19 12:07:35.774: INFO: Waiting up to 5m0s for pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008" in namespace "e2e-tests-containers-flh2x" to be "success or failure"
Feb 19 12:07:35.788: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.768225ms
Feb 19 12:07:38.238: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463314187s
Feb 19 12:07:40.250: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475066582s
Feb 19 12:07:44.249: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474019809s
Feb 19 12:07:46.267: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.492734828s
Feb 19 12:07:48.284: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 12.509495768s
Feb 19 12:07:50.299: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.524178695s
STEP: Saw pod success
Feb 19 12:07:50.299: INFO: Pod "client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:07:50.305: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:07:50.980: INFO: Waiting for pod client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:07:50.991: INFO: Pod client-containers-6a60ad0d-5310-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:07:50.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-flh2x" for this suite.
Feb 19 12:07:59.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:07:59.282: INFO: namespace: e2e-tests-containers-flh2x, resource: bindings, ignored listing per whitelist
Feb 19 12:07:59.326: INFO: namespace e2e-tests-containers-flh2x deletion completed in 8.328380376s

• [SLOW TEST:23.791 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
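
The "override all" pod above replaces both the image's ENTRYPOINT and its CMD by setting command and args on the container. A minimal Go sketch of that container shape; the image and the echoed values are assumptions.

// Illustrative only: the image and echoed arguments are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command overrides the image's ENTRYPOINT; Args overrides its CMD.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------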
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:07:59.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-mgjzj
Feb 19 12:08:07.736: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-mgjzj
STEP: checking the pod's current state and verifying that restartCount is present
Feb 19 12:08:07.742: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:12:09.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mgjzj" for this suite.
Feb 19 12:12:18.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:12:18.256: INFO: namespace: e2e-tests-container-probe-mgjzj, resource: bindings, ignored listing per whitelist
Feb 19 12:12:18.355: INFO: namespace e2e-tests-container-probe-mgjzj deletion completed in 8.432453983s

• [SLOW TEST:259.028 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
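
The probe spec above starts a pod whose container keeps /tmp/health present, attaches an exec liveness probe that runs "cat /tmp/health", and then watches for roughly four minutes to confirm restartCount stays at 0. A hedged sketch of that container shape follows; the image, command, and probe timings are assumptions, and the exec handler is assigned through the promoted field so the snippet compiles against both older and newer k8s.io/api releases.

// Illustrative only: image, command and probe timings are assumptions, not taken from this run.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	liveness := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
	// Assign the exec handler via the promoted field (the embedded struct was
	// renamed Handler -> ProbeHandler across k8s.io/api versions).
	liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// The container creates the health file and keeps running,
				// so the probe's `cat /tmp/health` keeps succeeding.
				Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: liveness,
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------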
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:12:18.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-13017d11-5311-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:12:18.799: INFO: Waiting up to 5m0s for pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-m6bnx" to be "success or failure"
Feb 19 12:12:18.854: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 55.146908ms
Feb 19 12:12:20.876: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077336415s
Feb 19 12:12:22.900: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100784992s
Feb 19 12:12:25.028: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22901813s
Feb 19 12:12:27.039: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240373424s
Feb 19 12:12:29.054: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255246603s
STEP: Saw pod success
Feb 19 12:12:29.054: INFO: Pod "pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:12:29.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 19 12:12:29.182: INFO: Waiting for pod pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:12:29.260: INFO: Pod pod-secrets-1310abce-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:12:29.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m6bnx" for this suite.
Feb 19 12:12:35.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:12:35.446: INFO: namespace: e2e-tests-secrets-m6bnx, resource: bindings, ignored listing per whitelist
Feb 19 12:12:35.494: INFO: namespace e2e-tests-secrets-m6bnx deletion completed in 6.219290737s

• [SLOW TEST:17.139 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
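
The mapped-secret pod above projects a Secret into a volume but remaps a key to a different file name via items, then has the test container read the mapped file. A hedged Go sketch of that pod shape; the secret name, key, path, and image are assumptions.

// Illustrative only: the secret name, key, path and image are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapped"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// Items remaps the secret key "data-1" to a new relative path
						// instead of the default file name equal to the key.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------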
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:12:35.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 19 12:12:35.739: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:12:57.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-b7t4k" for this suite.
Feb 19 12:13:04.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:13:04.192: INFO: namespace: e2e-tests-init-container-b7t4k, resource: bindings, ignored listing per whitelist
Feb 19 12:13:04.218: INFO: namespace e2e-tests-init-container-b7t4k deletion completed in 6.30154348s

• [SLOW TEST:28.724 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
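
The init-container spec above creates a pod with restartPolicy Never whose first init container exits non-zero, then waits for the pod to go Failed without the app container ever starting. A hedged sketch of that pod shape; the container names, image, and the failing command are assumptions.

// Illustrative only: names, image and the failing command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// With RestartPolicy Never, a failed init container fails the whole
			// pod instead of being retried, and app containers never start.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sleep", "300"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------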
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:13:04.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 19 12:13:04.488: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197715,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 19 12:13:04.488: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197715,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 19 12:13:14.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197728,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 19 12:13:14.662: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197728,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 19 12:13:24.731: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197740,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 19 12:13:24.731: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197740,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 19 12:13:34.755: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197753,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 19 12:13:34.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-a,UID:2e4f9752-5311-11ea-a994-fa163e34d433,ResourceVersion:22197753,Generation:0,CreationTimestamp:2020-02-19 12:13:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 19 12:13:44.789: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-b,UID:46534ee5-5311-11ea-a994-fa163e34d433,ResourceVersion:22197765,Generation:0,CreationTimestamp:2020-02-19 12:13:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 19 12:13:44.789: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-b,UID:46534ee5-5311-11ea-a994-fa163e34d433,ResourceVersion:22197765,Generation:0,CreationTimestamp:2020-02-19 12:13:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 19 12:13:54.812: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-b,UID:46534ee5-5311-11ea-a994-fa163e34d433,ResourceVersion:22197778,Generation:0,CreationTimestamp:2020-02-19 12:13:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 19 12:13:54.812: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4ndwt,SelfLink:/api/v1/namespaces/e2e-tests-watch-4ndwt/configmaps/e2e-watch-test-configmap-b,UID:46534ee5-5311-11ea-a994-fa163e34d433,ResourceVersion:22197778,Generation:0,CreationTimestamp:2020-02-19 12:13:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:14:04.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4ndwt" for this suite.
Feb 19 12:14:10.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:14:10.929: INFO: namespace: e2e-tests-watch-4ndwt, resource: bindings, ignored listing per whitelist
Feb 19 12:14:11.030: INFO: namespace e2e-tests-watch-4ndwt deletion completed in 6.202279671s

• [SLOW TEST:66.811 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
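The watcher spec above registers label-selected watches and asserts that only the matching ones receive the ADDED/MODIFIED/DELETED events printed as "Got : ..." (each event appears twice because two watchers match label A). As a rough illustration of the mechanism, here is a minimal client-go sketch that opens one such label-selected watch; it assumes a recent client-go (the v1.13-era client used by this suite takes no context argument on Watch), and the namespace and kubeconfig path are placeholders rather than values from this run.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a clientset from the local kubeconfig, like the suite's ">>> kubeConfig" step.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps labelled for watcher A; a second watcher with a broader
	// selector would also see these events, which is why the log prints each one twice.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each delivered event corresponds to one "Got : ADDED/MODIFIED/DELETED" line above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```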
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:14:11.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-560d27f0-5311-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:14:11.237: INFO: Waiting up to 5m0s for pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-5dkls" to be "success or failure"
Feb 19 12:14:11.245: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.964882ms
Feb 19 12:14:13.266: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028822013s
Feb 19 12:14:15.281: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04382672s
Feb 19 12:14:17.778: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540846188s
Feb 19 12:14:19.799: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561326087s
Feb 19 12:14:21.812: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.575025259s
STEP: Saw pod success
Feb 19 12:14:21.813: INFO: Pod "pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:14:21.826: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 19 12:14:22.675: INFO: Waiting for pod pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:14:22.717: INFO: Pod pod-secrets-5617e831-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:14:22.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5dkls" for this suite.
Feb 19 12:14:28.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:14:29.007: INFO: namespace: e2e-tests-secrets-5dkls, resource: bindings, ignored listing per whitelist
Feb 19 12:14:29.162: INFO: namespace e2e-tests-secrets-5dkls deletion completed in 6.431557183s

• [SLOW TEST:18.131 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
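The secret-volume spec above mounts a secret as a non-root user and checks the file permissions produced by defaultMode together with the pod-level fsGroup. A minimal sketch of such a pod spec follows; the secret name, image, mount path, and numeric IDs are illustrative, not the values generated by this run.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(1001) // run as non-root; group ownership of the volume
	mode := int32(0440)                      // defaultMode applied to every file in the secret volume

	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ln /etc/secret-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
	fmt.Printf("mounting secret %q with mode %o\n", pod.Spec.Volumes[0].Secret.SecretName, mode)
}
```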
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:14:29.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vhczk/configmap-test-60e9b054-5311-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 12:14:29.392: INFO: Waiting up to 5m0s for pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-vhczk" to be "success or failure"
Feb 19 12:14:29.399: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.177607ms
Feb 19 12:14:31.749: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356865259s
Feb 19 12:14:33.763: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370775126s
Feb 19 12:14:35.975: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583481551s
Feb 19 12:14:38.012: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61973279s
Feb 19 12:14:40.024: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632638231s
Feb 19 12:14:42.252: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.859994586s
STEP: Saw pod success
Feb 19 12:14:42.252: INFO: Pod "pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:14:42.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008 container env-test: 
STEP: delete the pod
Feb 19 12:14:42.881: INFO: Waiting for pod pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:14:42.902: INFO: Pod pod-configmaps-60eaedd8-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:14:42.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vhczk" for this suite.
Feb 19 12:14:48.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:14:49.073: INFO: namespace: e2e-tests-configmap-vhczk, resource: bindings, ignored listing per whitelist
Feb 19 12:14:49.131: INFO: namespace e2e-tests-configmap-vhczk deletion completed in 6.221176159s

• [SLOW TEST:19.970 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
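In the environment-variable ConfigMap spec above, the pod consumes a ConfigMap key through valueFrom.configMapKeyRef and then dumps its environment so the framework can check the expected value in the container log. A hedged sketch with illustrative names (the ConfigMap name, key, and image are placeholders):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // the framework reads this output from the pod log
				Env: []v1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &v1.EnvVarSource{
						ConfigMapKeyRef: &v1.ConfigMapKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].Name, "<- populated from the ConfigMap key")
}
```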
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:14:49.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6ccde57f-5311-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 12:14:49.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-k4s48" to be "success or failure"
Feb 19 12:14:49.352: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.134316ms
Feb 19 12:14:51.711: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376949709s
Feb 19 12:14:53.723: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38853662s
Feb 19 12:14:56.143: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809066955s
Feb 19 12:14:58.166: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.831891118s
Feb 19 12:15:00.181: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.846688206s
Feb 19 12:15:02.553: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.21849937s
STEP: Saw pod success
Feb 19 12:15:02.553: INFO: Pod "pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:15:02.561: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 19 12:15:02.849: INFO: Waiting for pod pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:15:02.889: INFO: Pod pod-configmaps-6cce936a-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:15:02.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k4s48" for this suite.
Feb 19 12:15:08.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:15:08.983: INFO: namespace: e2e-tests-configmap-k4s48, resource: bindings, ignored listing per whitelist
Feb 19 12:15:09.165: INFO: namespace e2e-tests-configmap-k4s48 deletion completed in 6.239048915s

• [SLOW TEST:20.031 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
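The "with mappings" variant above differs from a plain ConfigMap volume in that items remaps a key to a chosen file path inside the mount, and the container then reads that file for verification. A minimal sketch of that volume source (key, path, and ConfigMap name are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Map the ConfigMap key "data-1" to the file "path/to/data-2" inside the mount,
	// which the test's container then reads and the framework verifies.
	src := v1.VolumeSource{
		ConfigMap: &v1.ConfigMapVolumeSource{
			LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map-example"},
			Items: []v1.KeyToPath{{
				Key:  "data-1",
				Path: "path/to/data-2",
			}},
		},
	}
	fmt.Printf("key %q -> file %q\n", src.ConfigMap.Items[0].Key, src.ConfigMap.Items[0].Path)
}
```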
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:15:09.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 12:15:09.418: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-6gdph" to be "success or failure"
Feb 19 12:15:09.425: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.345482ms
Feb 19 12:15:11.438: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020538064s
Feb 19 12:15:13.452: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034209079s
Feb 19 12:15:15.469: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051530894s
Feb 19 12:15:17.485: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067167152s
Feb 19 12:15:19.501: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083121662s
STEP: Saw pod success
Feb 19 12:15:19.501: INFO: Pod "downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:15:19.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 12:15:20.290: INFO: Waiting for pod downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:15:20.648: INFO: Pod downwardapi-volume-78c5bb96-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:15:20.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6gdph" for this suite.
Feb 19 12:15:27.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:15:27.741: INFO: namespace: e2e-tests-projected-6gdph, resource: bindings, ignored listing per whitelist
Feb 19 12:15:27.754: INFO: namespace e2e-tests-projected-6gdph deletion completed in 7.090624481s

• [SLOW TEST:18.588 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
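The projected downward API spec above asks for limits.memory through a resourceFieldRef while leaving the container's memory limit unset, so the value written into the file falls back to the node's allocatable memory. A sketch of the relevant projected volume; the file path and volume name are illustrative, the container name mirrors the "client-container" seen in the log.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// The container sets no memory limit, so this resolves to
							// the node's allocatable memory when the file is written.
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println("projected file:", vol.Projected.Sources[0].DownwardAPI.Items[0].Path)
}
```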
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:15:27.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:16:27.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-8hxt6" for this suite.
Feb 19 12:16:33.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:16:33.397: INFO: namespace: e2e-tests-container-runtime-8hxt6, resource: bindings, ignored listing per whitelist
Feb 19 12:16:33.508: INFO: namespace e2e-tests-container-runtime-8hxt6 deletion completed in 6.355156155s

• [SLOW TEST:65.754 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
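The container-runtime block above starts short-lived containers under different restart policies (the rpa/rpof/rpn suffixes) and checks the resulting RestartCount, Phase, Ready condition, and State. Those assertions all read from the pod status; below is a small sketch of the fields involved, assuming a pod object fetched from the API server (the image and command here are illustrative, not the suite's actual test image).

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// report prints the status fields the blackbox test asserts on.
// In the real test the pod value comes from a Get against the API server.
func report(pod *v1.Pod) {
	fmt.Println("Phase:", pod.Status.Phase) // e.g. Succeeded for RestartPolicyNever with exit 0
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Println("RestartCount:", cs.RestartCount)
		fmt.Println("Ready:", cs.Ready)
		if cs.State.Terminated != nil {
			fmt.Println("ExitCode:", cs.State.Terminated.ExitCode)
		}
	}
}

func main() {
	// A pod that exits immediately; with RestartPolicyNever it should settle in phase Succeeded.
	pod := &v1.Pod{
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"},
			}},
		},
	}
	report(pod) // on a freshly built object the status is empty; it is populated once the pod runs
}
```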
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:16:33.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 19 12:16:33.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x8tvw'
Feb 19 12:16:37.797: INFO: stderr: ""
Feb 19 12:16:37.797: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 19 12:16:39.442: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:39.442: INFO: Found 0 / 1
Feb 19 12:16:39.901: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:39.901: INFO: Found 0 / 1
Feb 19 12:16:40.827: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:40.827: INFO: Found 0 / 1
Feb 19 12:16:41.851: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:41.851: INFO: Found 0 / 1
Feb 19 12:16:42.990: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:42.990: INFO: Found 0 / 1
Feb 19 12:16:43.853: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:43.853: INFO: Found 0 / 1
Feb 19 12:16:44.849: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:44.850: INFO: Found 0 / 1
Feb 19 12:16:45.832: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:45.832: INFO: Found 0 / 1
Feb 19 12:16:46.858: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:46.858: INFO: Found 1 / 1
Feb 19 12:16:46.858: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 19 12:16:46.868: INFO: Selector matched 1 pods for map[app:redis]
Feb 19 12:16:46.868: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 19 12:16:46.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw'
Feb 19 12:16:47.047: INFO: stderr: ""
Feb 19 12:16:47.048: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Feb 12:16:45.470 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Feb 12:16:45.471 # Server started, Redis version 3.2.12\n1:M 19 Feb 12:16:45.471 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Feb 12:16:45.471 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 19 12:16:47.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw --tail=1'
Feb 19 12:16:47.261: INFO: stderr: ""
Feb 19 12:16:47.261: INFO: stdout: "1:M 19 Feb 12:16:45.471 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 19 12:16:47.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw --limit-bytes=1'
Feb 19 12:16:47.510: INFO: stderr: ""
Feb 19 12:16:47.510: INFO: stdout: " "
STEP: exposing timestamps
Feb 19 12:16:47.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw --tail=1 --timestamps'
Feb 19 12:16:47.882: INFO: stderr: ""
Feb 19 12:16:47.882: INFO: stdout: "2020-02-19T12:16:45.472255195Z 1:M 19 Feb 12:16:45.471 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 19 12:16:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw --since=1s'
Feb 19 12:16:50.639: INFO: stderr: ""
Feb 19 12:16:50.639: INFO: stdout: ""
Feb 19 12:16:50.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-g5mlt redis-master --namespace=e2e-tests-kubectl-x8tvw --since=24h'
Feb 19 12:16:50.792: INFO: stderr: ""
Feb 19 12:16:50.792: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Feb 12:16:45.470 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Feb 12:16:45.471 # Server started, Redis version 3.2.12\n1:M 19 Feb 12:16:45.471 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Feb 12:16:45.471 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 19 12:16:50.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x8tvw'
Feb 19 12:16:50.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:16:50.925: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 19 12:16:50.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-x8tvw'
Feb 19 12:16:51.042: INFO: stderr: "No resources found.\n"
Feb 19 12:16:51.042: INFO: stdout: ""
Feb 19 12:16:51.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-x8tvw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 19 12:16:51.281: INFO: stderr: ""
Feb 19 12:16:51.281: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:16:51.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x8tvw" for this suite.
Feb 19 12:17:15.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:17:15.433: INFO: namespace: e2e-tests-kubectl-x8tvw, resource: bindings, ignored listing per whitelist
Feb 19 12:17:15.518: INFO: namespace e2e-tests-kubectl-x8tvw deletion completed in 24.219895398s

• [SLOW TEST:42.010 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
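The kubectl-logs spec above exercises --tail, --limit-bytes, --timestamps, and --since against the redis-master pod. The same filters exist as fields on PodLogOptions; here is a hedged client-go sketch mapping each flag to its field. The pod name, namespace, and kubeconfig path are placeholders, and older client-go versions take no context argument on DoRaw.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	tail, limit, since := int64(1), int64(1), int64(86400)
	opts := &v1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,  // kubectl logs --tail=1
		LimitBytes:   &limit, // kubectl logs --limit-bytes=1
		Timestamps:   true,   // kubectl logs --timestamps
		SinceSeconds: &since, // kubectl logs --since=24h
	}

	// The test applies these filters one at a time; combining them here is just to show the mapping.
	raw, err := cs.CoreV1().Pods("default").GetLogs("redis-master-example", opts).DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}
```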
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:17:15.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c41fd851-5311-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:17:15.847: INFO: Waiting up to 5m0s for pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-zgc55" to be "success or failure"
Feb 19 12:17:15.862: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.841759ms
Feb 19 12:17:18.149: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301979092s
Feb 19 12:17:20.171: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323816289s
Feb 19 12:17:22.212: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365484319s
Feb 19 12:17:24.230: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.382762651s
Feb 19 12:17:26.256: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.409311828s
STEP: Saw pod success
Feb 19 12:17:26.256: INFO: Pod "pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:17:26.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008 container secret-env-test: 
STEP: delete the pod
Feb 19 12:17:26.398: INFO: Waiting for pod pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:17:26.447: INFO: Pod pod-secrets-c42114b2-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:17:26.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zgc55" for this suite.
Feb 19 12:17:32.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:17:33.048: INFO: namespace: e2e-tests-secrets-zgc55, resource: bindings, ignored listing per whitelist
Feb 19 12:17:33.057: INFO: namespace e2e-tests-secrets-zgc55 deletion completed in 6.58503876s

• [SLOW TEST:17.538 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
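The env-var Secrets spec above is the Secret counterpart of the earlier ConfigMap environment test: the value is injected through valueFrom.secretKeyRef and verified from the container's env output. A short sketch with illustrative secret name and key:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	env := v1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &v1.EnvVarSource{
			SecretKeyRef: &v1.SecretKeySelector{
				LocalObjectReference: v1.LocalObjectReference{Name: "secret-test-example"},
				Key:                  "data-1",
			},
		},
	}
	// Attached to a container that runs `env`, this surfaces the secret value in the pod log
	// that the framework then fetches and checks.
	fmt.Println(env.Name, "from secret", env.ValueFrom.SecretKeyRef.Name)
}
```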
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:17:33.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 12:17:33.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-7nttr" to be "success or failure"
Feb 19 12:17:33.209: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.346169ms
Feb 19 12:17:35.223: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018912759s
Feb 19 12:17:37.235: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03126434s
Feb 19 12:17:39.801: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596856069s
Feb 19 12:17:41.816: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612573501s
Feb 19 12:17:43.834: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.630611346s
STEP: Saw pod success
Feb 19 12:17:43.834: INFO: Pod "downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:17:43.845: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 12:17:44.417: INFO: Waiting for pod downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:17:44.456: INFO: Pod downwardapi-volume-ce7ad9ce-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:17:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7nttr" for this suite.
Feb 19 12:17:50.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:17:50.855: INFO: namespace: e2e-tests-projected-7nttr, resource: bindings, ignored listing per whitelist
Feb 19 12:17:51.117: INFO: namespace e2e-tests-projected-7nttr deletion completed in 6.467034669s

• [SLOW TEST:18.059 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
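The DefaultMode variant above sets a single mode on the projected volume as a whole rather than per item, so every downward API file inherits it. A sketch; the mode value, path, and volume name are illustrative.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // applied to every file projected into the volume

	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("volume %q, default mode %o\n", vol.Name, defaultMode)
}
```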
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:17:51.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 19 12:17:51.782: INFO: Waiting up to 5m0s for pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-9fv22" to be "success or failure"
Feb 19 12:17:51.819: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.38713ms
Feb 19 12:17:53.938: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155865753s
Feb 19 12:17:55.954: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17112628s
Feb 19 12:17:58.643: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.860347065s
Feb 19 12:18:00.658: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875071182s
Feb 19 12:18:02.691: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.908198468s
STEP: Saw pod success
Feb 19 12:18:02.691: INFO: Pod "pod-d988bc34-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:18:02.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d988bc34-5311-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:18:02.818: INFO: Waiting for pod pod-d988bc34-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:18:02.830: INFO: Pod pod-d988bc34-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:18:02.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9fv22" for this suite.
Feb 19 12:18:09.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:18:09.182: INFO: namespace: e2e-tests-emptydir-9fv22, resource: bindings, ignored listing per whitelist
Feb 19 12:18:09.336: INFO: namespace e2e-tests-emptydir-9fv22 deletion completed in 6.493464815s

• [SLOW TEST:18.219 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
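The EmptyDir spec above mounts a volume on the node's default medium (as opposed to Memory/tmpfs) and has the container print the mount's permissions for the framework to check. A sketch with an illustrative image, command, and mount path:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := v1.Pod{
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"}, // prints the mode the test asserts on
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Default medium = node-local storage; v1.StorageMediumMemory would use tmpfs instead.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
				},
			}},
		},
	}
	fmt.Printf("emptyDir medium: %q\n", pod.Spec.Volumes[0].EmptyDir.Medium)
}
```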
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:18:09.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 12:18:09.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-blhl6" to be "success or failure"
Feb 19 12:18:09.876: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 143.269638ms
Feb 19 12:18:11.893: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160625104s
Feb 19 12:18:13.905: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172158207s
Feb 19 12:18:16.012: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279366044s
Feb 19 12:18:18.072: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33923687s
Feb 19 12:18:20.292: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.558921716s
STEP: Saw pod success
Feb 19 12:18:20.292: INFO: Pod "downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:18:20.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 12:18:20.709: INFO: Waiting for pod downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:18:20.724: INFO: Pod downwardapi-volume-e43cbbd1-5311-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:18:20.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-blhl6" for this suite.
Feb 19 12:18:26.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:18:26.824: INFO: namespace: e2e-tests-projected-blhl6, resource: bindings, ignored listing per whitelist
Feb 19 12:18:26.934: INFO: namespace e2e-tests-projected-blhl6 deletion completed in 6.201859276s

• [SLOW TEST:17.598 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
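Unlike the DefaultMode case earlier, the "set mode on item file" spec above pins the mode on the individual projected item, overriding any volume-level default. Sketch (mode value and path illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-file mode, taking precedence over the volume's defaultMode

	item := v1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
		Mode:     &itemMode,
	}
	src := v1.ProjectedVolumeSource{
		Sources: []v1.VolumeProjection{{
			DownwardAPI: &v1.DownwardAPIProjection{Items: []v1.DownwardAPIVolumeFile{item}},
		}},
	}
	fmt.Printf("file %q mode %o\n", src.Sources[0].DownwardAPI.Items[0].Path, itemMode)
}
```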
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:18:26.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 19 12:18:27.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:27.280: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 19 12:18:27.280: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 19 12:18:27.293: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 19 12:18:27.413: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 19 12:18:27.467: INFO: scanned /root for discovery docs: 
Feb 19 12:18:27.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:55.090: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 19 12:18:55.091: INFO: stdout: "Created e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b\nScaling up e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 19 12:18:55.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:55.283: INFO: stderr: ""
Feb 19 12:18:55.283: INFO: stdout: "e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b-qbzxl "
Feb 19 12:18:55.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b-qbzxl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:55.457: INFO: stderr: ""
Feb 19 12:18:55.457: INFO: stdout: "true"
Feb 19 12:18:55.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b-qbzxl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:55.563: INFO: stderr: ""
Feb 19 12:18:55.564: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 19 12:18:55.564: INFO: e2e-test-nginx-rc-0b8926db4e802ec70bb316a0b03a724b-qbzxl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 19 12:18:55.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j976n'
Feb 19 12:18:55.743: INFO: stderr: ""
Feb 19 12:18:55.743: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:18:55.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j976n" for this suite.
Feb 19 12:19:19.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:19:20.021: INFO: namespace: e2e-tests-kubectl-j976n, resource: bindings, ignored listing per whitelist
Feb 19 12:19:20.187: INFO: namespace e2e-tests-kubectl-j976n deletion completed in 24.424560255s

• [SLOW TEST:53.253 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
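In the rolling-update spec above, `kubectl run --generator=run/v1` creates a bare ReplicationController, and `rolling-update` then clones it under a hashed name, scales the copy up and the original down, and finally renames the copy back, exactly as the stdout in the log describes. Below is a sketch of the kind of RC the `run` step produces; the name, label, and image are taken from the log, everything else is illustrative.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-rc"}

	rc := v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // rolling-update adds a deployment hash here on the copied RC
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Println("rc:", rc.Name, "image:", rc.Spec.Template.Spec.Containers[0].Image)
}
```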
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:19:20.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:19:20.583: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 19 12:19:20.630: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 19 12:19:25.694: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 19 12:19:31.718: INFO: Creating deployment "test-rolling-update-deployment"
Feb 19 12:19:31.740: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 19 12:19:31.774: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 19 12:19:33.832: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 19 12:19:33.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 12:19:35.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 12:19:38.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 12:19:39.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717711571, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 19 12:19:42.251: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 19 12:19:42.464: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2jktx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2jktx/deployments/test-rolling-update-deployment,UID:1520e1c1-5312-11ea-a994-fa163e34d433,ResourceVersion:22198610,Generation:1,CreationTimestamp:2020-02-19 12:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-19 12:19:31 +0000 UTC 2020-02-19 12:19:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-19 12:19:40 +0000 UTC 2020-02-19 12:19:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 19 12:19:42.478: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2jktx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2jktx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:153ba3c7-5312-11ea-a994-fa163e34d433,ResourceVersion:22198601,Generation:1,CreationTimestamp:2020-02-19 12:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1520e1c1-5312-11ea-a994-fa163e34d433 0xc0017e6927 0xc0017e6928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 19 12:19:42.479: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 19 12:19:42.480: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2jktx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2jktx/replicasets/test-rolling-update-controller,UID:0e7e3772-5312-11ea-a994-fa163e34d433,ResourceVersion:22198609,Generation:2,CreationTimestamp:2020-02-19 12:19:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1520e1c1-5312-11ea-a994-fa163e34d433 0xc0017e6817 0xc0017e6818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 19 12:19:42.512: INFO: Pod "test-rolling-update-deployment-75db98fb4c-xg745" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-xg745,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2jktx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2jktx/pods/test-rolling-update-deployment-75db98fb4c-xg745,UID:15441407-5312-11ea-a994-fa163e34d433,ResourceVersion:22198600,Generation:0,CreationTimestamp:2020-02-19 12:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 153ba3c7-5312-11ea-a994-fa163e34d433 0xc0004b1427 0xc0004b1428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xpbbf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xpbbf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xpbbf true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004b1560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0004b1610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 12:19:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 12:19:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 12:19:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 12:19:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-19 12:19:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-19 12:19:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://5a37585d0be4e2906fcba7ddc037290462f3a6f8a09cc9201b1874156fe9014b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:19:42.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2jktx" for this suite.
Feb 19 12:19:50.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:19:50.777: INFO: namespace: e2e-tests-deployment-2jktx, resource: bindings, ignored listing per whitelist
Feb 19 12:19:50.823: INFO: namespace e2e-tests-deployment-2jktx deletion completed in 8.278145384s

• [SLOW TEST:30.636 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
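(Reference sketch, not part of the suite's output: the rolling update asserted above can be reproduced with a plain manifest. The object name and explicit 25% surge/unavailability values below are assumptions for illustration; the labels and image mirror the Deployment dump above.)

# Illustrative only: a single-replica Deployment using the RollingUpdate strategy.
# An existing ReplicaSet whose pods match .spec.selector is adopted, then superseded
# by a new ReplicaSet when the pod template changes -- the behaviour the test verifies.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: "25%"
      maxSurge: "25%"
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# kubectl rollout status deployment/test-rolling-update-deployment then reports the old
# ReplicaSet scaling to 0 and the new one to 1, matching the DeploymentStatus logged above.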
SSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:19:50.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m84db
Feb 19 12:20:02.220: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m84db
STEP: checking the pod's current state and verifying that restartCount is present
Feb 19 12:20:02.224: INFO: Initial restart count of pod liveness-http is 0
Feb 19 12:20:22.377: INFO: Restart count of pod e2e-tests-container-probe-m84db/liveness-http is now 1 (20.152902964s elapsed)
Feb 19 12:20:42.835: INFO: Restart count of pod e2e-tests-container-probe-m84db/liveness-http is now 2 (40.61115014s elapsed)
Feb 19 12:21:03.696: INFO: Restart count of pod e2e-tests-container-probe-m84db/liveness-http is now 3 (1m1.472100297s elapsed)
Feb 19 12:21:22.148: INFO: Restart count of pod e2e-tests-container-probe-m84db/liveness-http is now 4 (1m19.923550561s elapsed)
Feb 19 12:22:21.364: INFO: Restart count of pod e2e-tests-container-probe-m84db/liveness-http is now 5 (2m19.140278585s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:22:21.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m84db" for this suite.
Feb 19 12:22:27.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:22:27.701: INFO: namespace: e2e-tests-container-probe-m84db, resource: bindings, ignored listing per whitelist
Feb 19 12:22:27.756: INFO: namespace e2e-tests-container-probe-m84db deletion completed in 6.227277714s

• [SLOW TEST:156.933 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
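(Reference sketch: the monotonically increasing restart count comes from a liveness probe that keeps failing, so the kubelet restarts the container each time. A minimal pod of that shape is sketched below; the image, probe path and timings are assumptions for illustration, not the exact spec the suite creates for liveness-http.)

# Sketch: an HTTP liveness probe that fails, driving restartCount up monotonically.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]            # demo server that eventually starts failing /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# The same counter the log polls above:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'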
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:22:27.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:22:27.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 19 12:22:28.116: INFO: stderr: ""
Feb 19 12:22:28.116: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:22:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-99k2n" for this suite.
Feb 19 12:22:34.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:22:34.391: INFO: namespace: e2e-tests-kubectl-99k2n, resource: bindings, ignored listing per whitelist
Feb 19 12:22:34.427: INFO: namespace e2e-tests-kubectl-99k2n deletion completed in 6.298410129s

• [SLOW TEST:6.670 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
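(Reference: the check above only shells out to the client binary and asserts that both halves of the output are present; by hand it is the same one-liner the log shows.)

kubectl --kubeconfig=/root/.kube/config version
# Expect both a "Client Version:" and a "Server Version:" line; structured output
# (e.g. kubectl version -o json) carries the same version.Info fields shown in the stdout above.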
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:22:34.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 19 12:22:53.124: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:22:53.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-7k7h4" for this suite.
Feb 19 12:23:21.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:23:21.950: INFO: namespace: e2e-tests-replicaset-7k7h4, resource: bindings, ignored listing per whitelist
Feb 19 12:23:21.973: INFO: namespace e2e-tests-replicaset-7k7h4 deletion completed in 28.602211959s

• [SLOW TEST:47.546 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
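(Reference sketch: adoption and release above are driven entirely by labels. A ReplicaSet whose selector matches an existing bare pod takes ownership of it instead of creating a new one, and relabelling that pod makes the controller drop its ownerReference and create a replacement. Object names, label values and image below are illustrative, loosely following the log.)

# 1. A bare pod carrying the label the ReplicaSet will select on.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
# 2. A ReplicaSet with a matching selector adopts the pod ("Then the orphan pod is adopted").
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# 3. Changing the pod's label releases it ("Then the pod is released"): the ReplicaSet drops
#    its ownerReference and spins up a replacement to get back to 1 matching replica.
kubectl label pod pod-adoption-release name=released --overwrite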
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:23:21.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 19 12:23:22.337: INFO: Waiting up to 5m0s for pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-d75fh" to be "success or failure"
Feb 19 12:23:22.434: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 96.209881ms
Feb 19 12:23:24.828: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490325627s
Feb 19 12:23:27.709: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.371392709s
Feb 19 12:23:29.732: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.394513197s
Feb 19 12:23:31.747: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.409790713s
Feb 19 12:23:33.757: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 11.4197569s
Feb 19 12:23:35.774: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.436280038s
STEP: Saw pod success
Feb 19 12:23:35.774: INFO: Pod "pod-9e893fa2-5312-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:23:35.791: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9e893fa2-5312-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:23:35.944: INFO: Waiting for pod pod-9e893fa2-5312-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:23:35.957: INFO: Pod pod-9e893fa2-5312-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:23:35.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d75fh" for this suite.
Feb 19 12:23:42.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:23:42.187: INFO: namespace: e2e-tests-emptydir-d75fh, resource: bindings, ignored listing per whitelist
Feb 19 12:23:42.359: INFO: namespace e2e-tests-emptydir-d75fh deletion completed in 6.391509979s

• [SLOW TEST:20.386 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
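(Reference sketch: the (root,0777,tmpfs) variant mounts a memory-backed emptyDir and checks the mount's permissions and filesystem type from inside the container before the pod runs to completion. A rough stand-alone equivalent follows; the image and inspection commands are assumptions, not the e2e mounttest invocation.)

# Sketch: memory-backed emptyDir inspected from inside the pod; the pod ends "Succeeded",
# as in the log above, and its output shows a tmpfs mount on the volume path.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo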
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:23:42.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2pwp6
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2pwp6
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2pwp6
Feb 19 12:23:42.678: INFO: Found 0 stateful pods, waiting for 1
Feb 19 12:23:52.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 19 12:23:52.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 12:23:53.324: INFO: stderr: "I0219 12:23:52.934017    2274 log.go:172] (0xc0006e6370) (0xc000706640) Create stream\nI0219 12:23:52.934591    2274 log.go:172] (0xc0006e6370) (0xc000706640) Stream added, broadcasting: 1\nI0219 12:23:52.942080    2274 log.go:172] (0xc0006e6370) Reply frame received for 1\nI0219 12:23:52.942113    2274 log.go:172] (0xc0006e6370) (0xc00035adc0) Create stream\nI0219 12:23:52.942126    2274 log.go:172] (0xc0006e6370) (0xc00035adc0) Stream added, broadcasting: 3\nI0219 12:23:52.943614    2274 log.go:172] (0xc0006e6370) Reply frame received for 3\nI0219 12:23:52.943649    2274 log.go:172] (0xc0006e6370) (0xc000522000) Create stream\nI0219 12:23:52.943663    2274 log.go:172] (0xc0006e6370) (0xc000522000) Stream added, broadcasting: 5\nI0219 12:23:52.944907    2274 log.go:172] (0xc0006e6370) Reply frame received for 5\nI0219 12:23:53.149433    2274 log.go:172] (0xc0006e6370) Data frame received for 3\nI0219 12:23:53.149885    2274 log.go:172] (0xc00035adc0) (3) Data frame handling\nI0219 12:23:53.149908    2274 log.go:172] (0xc00035adc0) (3) Data frame sent\nI0219 12:23:53.315251    2274 log.go:172] (0xc0006e6370) (0xc00035adc0) Stream removed, broadcasting: 3\nI0219 12:23:53.315587    2274 log.go:172] (0xc0006e6370) Data frame received for 1\nI0219 12:23:53.315676    2274 log.go:172] (0xc0006e6370) (0xc000522000) Stream removed, broadcasting: 5\nI0219 12:23:53.315733    2274 log.go:172] (0xc000706640) (1) Data frame handling\nI0219 12:23:53.315749    2274 log.go:172] (0xc000706640) (1) Data frame sent\nI0219 12:23:53.315760    2274 log.go:172] (0xc0006e6370) (0xc000706640) Stream removed, broadcasting: 1\nI0219 12:23:53.315771    2274 log.go:172] (0xc0006e6370) Go away received\nI0219 12:23:53.316288    2274 log.go:172] (0xc0006e6370) (0xc000706640) Stream removed, broadcasting: 1\nI0219 12:23:53.316306    2274 log.go:172] (0xc0006e6370) (0xc00035adc0) Stream removed, broadcasting: 3\nI0219 12:23:53.316314    2274 log.go:172] (0xc0006e6370) (0xc000522000) Stream removed, broadcasting: 5\n"
Feb 19 12:23:53.324: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 12:23:53.324: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 12:23:53.353: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 19 12:24:03.377: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 12:24:03.377: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 12:24:03.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998637s
Feb 19 12:24:04.544: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.977014826s
Feb 19 12:24:05.566: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.937665921s
Feb 19 12:24:06.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.915672801s
Feb 19 12:24:07.619: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.880754652s
Feb 19 12:24:08.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.862597923s
Feb 19 12:24:09.658: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.847047507s
Feb 19 12:24:10.677: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.823662681s
Feb 19 12:24:11.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.8051488s
Feb 19 12:24:12.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 772.243531ms
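(Reference: the halt just verified follows from the readiness trick a few lines up. The stateful pods serve /usr/share/nginx/html/index.html and, as an assumption inferred from the mv commands rather than quoted from the suite's manifest, carry an HTTP readiness probe against that file; with the file moved aside, ss-0 stays Ready=false and OrderedReady pod management refuses to create ss-1. The next step restores the file; the equivalent manual commands are:)

# Toggle ss-0's readiness the same way the suite's execs do (namespace taken from the log).
NS=e2e-tests-statefulset-2pwp6
# break readiness: move the probed file out of the nginx web root
kubectl --kubeconfig=/root/.kube/config exec -n "$NS" ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# restore readiness so the scale-up to 3 replicas can proceed
kubectl --kubeconfig=/root/.kube/config exec -n "$NS" ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'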
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2pwp6
Feb 19 12:24:13.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:24:14.438: INFO: stderr: "I0219 12:24:14.105522    2295 log.go:172] (0xc00014c580) (0xc00072c5a0) Create stream\nI0219 12:24:14.105987    2295 log.go:172] (0xc00014c580) (0xc00072c5a0) Stream added, broadcasting: 1\nI0219 12:24:14.113515    2295 log.go:172] (0xc00014c580) Reply frame received for 1\nI0219 12:24:14.113605    2295 log.go:172] (0xc00014c580) (0xc00062cbe0) Create stream\nI0219 12:24:14.113616    2295 log.go:172] (0xc00014c580) (0xc00062cbe0) Stream added, broadcasting: 3\nI0219 12:24:14.115116    2295 log.go:172] (0xc00014c580) Reply frame received for 3\nI0219 12:24:14.115164    2295 log.go:172] (0xc00014c580) (0xc00072c640) Create stream\nI0219 12:24:14.115173    2295 log.go:172] (0xc00014c580) (0xc00072c640) Stream added, broadcasting: 5\nI0219 12:24:14.116428    2295 log.go:172] (0xc00014c580) Reply frame received for 5\nI0219 12:24:14.281775    2295 log.go:172] (0xc00014c580) Data frame received for 3\nI0219 12:24:14.281834    2295 log.go:172] (0xc00062cbe0) (3) Data frame handling\nI0219 12:24:14.281866    2295 log.go:172] (0xc00062cbe0) (3) Data frame sent\nI0219 12:24:14.426525    2295 log.go:172] (0xc00014c580) Data frame received for 1\nI0219 12:24:14.426585    2295 log.go:172] (0xc00072c5a0) (1) Data frame handling\nI0219 12:24:14.426615    2295 log.go:172] (0xc00072c5a0) (1) Data frame sent\nI0219 12:24:14.427004    2295 log.go:172] (0xc00014c580) (0xc00072c5a0) Stream removed, broadcasting: 1\nI0219 12:24:14.427479    2295 log.go:172] (0xc00014c580) (0xc00062cbe0) Stream removed, broadcasting: 3\nI0219 12:24:14.428108    2295 log.go:172] (0xc00014c580) (0xc00072c640) Stream removed, broadcasting: 5\nI0219 12:24:14.428154    2295 log.go:172] (0xc00014c580) (0xc00072c5a0) Stream removed, broadcasting: 1\nI0219 12:24:14.428220    2295 log.go:172] (0xc00014c580) (0xc00062cbe0) Stream removed, broadcasting: 3\nI0219 12:24:14.428244    2295 log.go:172] (0xc00014c580) (0xc00072c640) Stream removed, broadcasting: 5\n"
Feb 19 12:24:14.438: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 12:24:14.438: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 12:24:14.456: INFO: Found 1 stateful pods, waiting for 3
Feb 19 12:24:24.478: INFO: Found 2 stateful pods, waiting for 3
Feb 19 12:24:34.468: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 12:24:34.468: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 12:24:34.468: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 19 12:24:44.517: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 12:24:44.517: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 12:24:44.517: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 19 12:24:44.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 12:24:45.499: INFO: stderr: "I0219 12:24:44.974308    2317 log.go:172] (0xc00069a2c0) (0xc0007872c0) Create stream\nI0219 12:24:44.974528    2317 log.go:172] (0xc00069a2c0) (0xc0007872c0) Stream added, broadcasting: 1\nI0219 12:24:44.998110    2317 log.go:172] (0xc00069a2c0) Reply frame received for 1\nI0219 12:24:44.998211    2317 log.go:172] (0xc00069a2c0) (0xc000787360) Create stream\nI0219 12:24:44.998234    2317 log.go:172] (0xc00069a2c0) (0xc000787360) Stream added, broadcasting: 3\nI0219 12:24:45.000001    2317 log.go:172] (0xc00069a2c0) Reply frame received for 3\nI0219 12:24:45.000066    2317 log.go:172] (0xc00069a2c0) (0xc00036e000) Create stream\nI0219 12:24:45.000077    2317 log.go:172] (0xc00069a2c0) (0xc00036e000) Stream added, broadcasting: 5\nI0219 12:24:45.010348    2317 log.go:172] (0xc00069a2c0) Reply frame received for 5\nI0219 12:24:45.335819    2317 log.go:172] (0xc00069a2c0) Data frame received for 3\nI0219 12:24:45.336308    2317 log.go:172] (0xc000787360) (3) Data frame handling\nI0219 12:24:45.336324    2317 log.go:172] (0xc000787360) (3) Data frame sent\nI0219 12:24:45.492141    2317 log.go:172] (0xc00069a2c0) Data frame received for 1\nI0219 12:24:45.492181    2317 log.go:172] (0xc0007872c0) (1) Data frame handling\nI0219 12:24:45.492197    2317 log.go:172] (0xc0007872c0) (1) Data frame sent\nI0219 12:24:45.492297    2317 log.go:172] (0xc00069a2c0) (0xc0007872c0) Stream removed, broadcasting: 1\nI0219 12:24:45.493228    2317 log.go:172] (0xc00069a2c0) (0xc000787360) Stream removed, broadcasting: 3\nI0219 12:24:45.493369    2317 log.go:172] (0xc00069a2c0) (0xc00036e000) Stream removed, broadcasting: 5\nI0219 12:24:45.493429    2317 log.go:172] (0xc00069a2c0) Go away received\nI0219 12:24:45.493518    2317 log.go:172] (0xc00069a2c0) (0xc0007872c0) Stream removed, broadcasting: 1\nI0219 12:24:45.493567    2317 log.go:172] (0xc00069a2c0) (0xc000787360) Stream removed, broadcasting: 3\nI0219 12:24:45.493594    2317 log.go:172] (0xc00069a2c0) (0xc00036e000) Stream removed, broadcasting: 5\n"
Feb 19 12:24:45.500: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 12:24:45.500: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 12:24:45.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 12:24:46.049: INFO: stderr: "I0219 12:24:45.647032    2337 log.go:172] (0xc000720370) (0xc000742640) Create stream\nI0219 12:24:45.647168    2337 log.go:172] (0xc000720370) (0xc000742640) Stream added, broadcasting: 1\nI0219 12:24:45.651872    2337 log.go:172] (0xc000720370) Reply frame received for 1\nI0219 12:24:45.651896    2337 log.go:172] (0xc000720370) (0xc000688c80) Create stream\nI0219 12:24:45.651902    2337 log.go:172] (0xc000720370) (0xc000688c80) Stream added, broadcasting: 3\nI0219 12:24:45.652896    2337 log.go:172] (0xc000720370) Reply frame received for 3\nI0219 12:24:45.652915    2337 log.go:172] (0xc000720370) (0xc0007426e0) Create stream\nI0219 12:24:45.652923    2337 log.go:172] (0xc000720370) (0xc0007426e0) Stream added, broadcasting: 5\nI0219 12:24:45.653835    2337 log.go:172] (0xc000720370) Reply frame received for 5\nI0219 12:24:45.804538    2337 log.go:172] (0xc000720370) Data frame received for 3\nI0219 12:24:45.804591    2337 log.go:172] (0xc000688c80) (3) Data frame handling\nI0219 12:24:45.804611    2337 log.go:172] (0xc000688c80) (3) Data frame sent\nI0219 12:24:46.037099    2337 log.go:172] (0xc000720370) (0xc000688c80) Stream removed, broadcasting: 3\nI0219 12:24:46.037600    2337 log.go:172] (0xc000720370) Data frame received for 1\nI0219 12:24:46.037620    2337 log.go:172] (0xc000742640) (1) Data frame handling\nI0219 12:24:46.037650    2337 log.go:172] (0xc000742640) (1) Data frame sent\nI0219 12:24:46.037699    2337 log.go:172] (0xc000720370) (0xc0007426e0) Stream removed, broadcasting: 5\nI0219 12:24:46.037803    2337 log.go:172] (0xc000720370) (0xc000742640) Stream removed, broadcasting: 1\nI0219 12:24:46.037883    2337 log.go:172] (0xc000720370) Go away received\nI0219 12:24:46.038433    2337 log.go:172] (0xc000720370) (0xc000742640) Stream removed, broadcasting: 1\nI0219 12:24:46.038471    2337 log.go:172] (0xc000720370) (0xc000688c80) Stream removed, broadcasting: 3\nI0219 12:24:46.038484    2337 log.go:172] (0xc000720370) (0xc0007426e0) Stream removed, broadcasting: 5\n"
Feb 19 12:24:46.050: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 12:24:46.050: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 12:24:46.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 19 12:24:46.677: INFO: stderr: "I0219 12:24:46.288345    2360 log.go:172] (0xc00078c0b0) (0xc000755040) Create stream\nI0219 12:24:46.288585    2360 log.go:172] (0xc00078c0b0) (0xc000755040) Stream added, broadcasting: 1\nI0219 12:24:46.292308    2360 log.go:172] (0xc00078c0b0) Reply frame received for 1\nI0219 12:24:46.292338    2360 log.go:172] (0xc00078c0b0) (0xc0007550e0) Create stream\nI0219 12:24:46.292347    2360 log.go:172] (0xc00078c0b0) (0xc0007550e0) Stream added, broadcasting: 3\nI0219 12:24:46.293025    2360 log.go:172] (0xc00078c0b0) Reply frame received for 3\nI0219 12:24:46.293041    2360 log.go:172] (0xc00078c0b0) (0xc000755180) Create stream\nI0219 12:24:46.293045    2360 log.go:172] (0xc00078c0b0) (0xc000755180) Stream added, broadcasting: 5\nI0219 12:24:46.293882    2360 log.go:172] (0xc00078c0b0) Reply frame received for 5\nI0219 12:24:46.412606    2360 log.go:172] (0xc00078c0b0) Data frame received for 3\nI0219 12:24:46.412662    2360 log.go:172] (0xc0007550e0) (3) Data frame handling\nI0219 12:24:46.412683    2360 log.go:172] (0xc0007550e0) (3) Data frame sent\nI0219 12:24:46.663461    2360 log.go:172] (0xc00078c0b0) Data frame received for 1\nI0219 12:24:46.663561    2360 log.go:172] (0xc000755040) (1) Data frame handling\nI0219 12:24:46.663589    2360 log.go:172] (0xc000755040) (1) Data frame sent\nI0219 12:24:46.663612    2360 log.go:172] (0xc00078c0b0) (0xc000755040) Stream removed, broadcasting: 1\nI0219 12:24:46.664047    2360 log.go:172] (0xc00078c0b0) (0xc0007550e0) Stream removed, broadcasting: 3\nI0219 12:24:46.664370    2360 log.go:172] (0xc00078c0b0) (0xc000755180) Stream removed, broadcasting: 5\nI0219 12:24:46.664447    2360 log.go:172] (0xc00078c0b0) (0xc000755040) Stream removed, broadcasting: 1\nI0219 12:24:46.664475    2360 log.go:172] (0xc00078c0b0) (0xc0007550e0) Stream removed, broadcasting: 3\nI0219 12:24:46.664515    2360 log.go:172] (0xc00078c0b0) (0xc000755180) Stream removed, broadcasting: 5\n"
Feb 19 12:24:46.677: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 19 12:24:46.677: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 19 12:24:46.677: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 12:24:46.716: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 19 12:24:56.762: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 12:24:56.762: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 12:24:56.762: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 19 12:24:56.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997948s
Feb 19 12:24:57.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982665215s
Feb 19 12:24:58.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.92270208s
Feb 19 12:24:59.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.903914974s
Feb 19 12:25:00.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.877434457s
Feb 19 12:25:01.965: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.857646953s
Feb 19 12:25:02.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.836032043s
Feb 19 12:25:04.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.823325155s
Feb 19 12:25:05.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.735299972s
Feb 19 12:25:06.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 715.615622ms
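(Reference: with all three pods deliberately unready, the scale-down below exercises the other half of the guarantee: pods are removed highest-ordinal first, and, as the "doesn't scale past 3" checks just confirmed, the controller holds scaling while pods are unhealthy. Outside the framework the equivalent scale-down is the following; the namespace and the baz=blah,foo=bar selector are taken from the log.)

# Scale to zero and watch the pods terminate in reverse ordinal order (ss-2, ss-1, ss-0).
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-2pwp6 scale statefulset ss --replicas=0
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-2pwp6 get pods -l baz=blah,foo=bar -w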
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-2pwp6
Feb 19 12:25:07.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:07.685: INFO: stderr: "I0219 12:25:07.368351    2382 log.go:172] (0xc00071e370) (0xc0007ae640) Create stream\nI0219 12:25:07.368808    2382 log.go:172] (0xc00071e370) (0xc0007ae640) Stream added, broadcasting: 1\nI0219 12:25:07.374677    2382 log.go:172] (0xc00071e370) Reply frame received for 1\nI0219 12:25:07.374731    2382 log.go:172] (0xc00071e370) (0xc0005fac80) Create stream\nI0219 12:25:07.374755    2382 log.go:172] (0xc00071e370) (0xc0005fac80) Stream added, broadcasting: 3\nI0219 12:25:07.375843    2382 log.go:172] (0xc00071e370) Reply frame received for 3\nI0219 12:25:07.375867    2382 log.go:172] (0xc00071e370) (0xc0005ae000) Create stream\nI0219 12:25:07.375874    2382 log.go:172] (0xc00071e370) (0xc0005ae000) Stream added, broadcasting: 5\nI0219 12:25:07.376988    2382 log.go:172] (0xc00071e370) Reply frame received for 5\nI0219 12:25:07.497314    2382 log.go:172] (0xc00071e370) Data frame received for 3\nI0219 12:25:07.497389    2382 log.go:172] (0xc0005fac80) (3) Data frame handling\nI0219 12:25:07.497408    2382 log.go:172] (0xc0005fac80) (3) Data frame sent\nI0219 12:25:07.676864    2382 log.go:172] (0xc00071e370) (0xc0005fac80) Stream removed, broadcasting: 3\nI0219 12:25:07.676989    2382 log.go:172] (0xc00071e370) Data frame received for 1\nI0219 12:25:07.677019    2382 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0219 12:25:07.677029    2382 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0219 12:25:07.677041    2382 log.go:172] (0xc00071e370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0219 12:25:07.677057    2382 log.go:172] (0xc00071e370) (0xc0005ae000) Stream removed, broadcasting: 5\nI0219 12:25:07.677069    2382 log.go:172] (0xc00071e370) Go away received\nI0219 12:25:07.677228    2382 log.go:172] (0xc00071e370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0219 12:25:07.677242    2382 log.go:172] (0xc00071e370) (0xc0005fac80) Stream removed, broadcasting: 3\nI0219 12:25:07.677255    2382 log.go:172] (0xc00071e370) (0xc0005ae000) Stream removed, broadcasting: 5\n"
Feb 19 12:25:07.685: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 12:25:07.685: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 12:25:07.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:08.310: INFO: stderr: "I0219 12:25:07.981196    2404 log.go:172] (0xc0005e02c0) (0xc000602640) Create stream\nI0219 12:25:07.981374    2404 log.go:172] (0xc0005e02c0) (0xc000602640) Stream added, broadcasting: 1\nI0219 12:25:07.986983    2404 log.go:172] (0xc0005e02c0) Reply frame received for 1\nI0219 12:25:07.987040    2404 log.go:172] (0xc0005e02c0) (0xc0007ccbe0) Create stream\nI0219 12:25:07.987053    2404 log.go:172] (0xc0005e02c0) (0xc0007ccbe0) Stream added, broadcasting: 3\nI0219 12:25:07.988223    2404 log.go:172] (0xc0005e02c0) Reply frame received for 3\nI0219 12:25:07.988252    2404 log.go:172] (0xc0005e02c0) (0xc000542000) Create stream\nI0219 12:25:07.988258    2404 log.go:172] (0xc0005e02c0) (0xc000542000) Stream added, broadcasting: 5\nI0219 12:25:07.989774    2404 log.go:172] (0xc0005e02c0) Reply frame received for 5\nI0219 12:25:08.125392    2404 log.go:172] (0xc0005e02c0) Data frame received for 3\nI0219 12:25:08.125482    2404 log.go:172] (0xc0007ccbe0) (3) Data frame handling\nI0219 12:25:08.125543    2404 log.go:172] (0xc0007ccbe0) (3) Data frame sent\nI0219 12:25:08.299721    2404 log.go:172] (0xc0005e02c0) (0xc0007ccbe0) Stream removed, broadcasting: 3\nI0219 12:25:08.299860    2404 log.go:172] (0xc0005e02c0) Data frame received for 1\nI0219 12:25:08.299891    2404 log.go:172] (0xc000602640) (1) Data frame handling\nI0219 12:25:08.299916    2404 log.go:172] (0xc000602640) (1) Data frame sent\nI0219 12:25:08.299936    2404 log.go:172] (0xc0005e02c0) (0xc000602640) Stream removed, broadcasting: 1\nI0219 12:25:08.299985    2404 log.go:172] (0xc0005e02c0) (0xc000542000) Stream removed, broadcasting: 5\nI0219 12:25:08.300048    2404 log.go:172] (0xc0005e02c0) Go away received\nI0219 12:25:08.300337    2404 log.go:172] (0xc0005e02c0) (0xc000602640) Stream removed, broadcasting: 1\nI0219 12:25:08.300354    2404 log.go:172] (0xc0005e02c0) (0xc0007ccbe0) Stream removed, broadcasting: 3\nI0219 12:25:08.300366    2404 log.go:172] (0xc0005e02c0) (0xc000542000) Stream removed, broadcasting: 5\n"
Feb 19 12:25:08.310: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 19 12:25:08.311: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 19 12:25:08.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:09.148: INFO: rc: 126
Feb 19 12:25:09.149: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"signal: broken pipe\"": unknown
 I0219 12:25:08.629568    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Create stream
I0219 12:25:08.629930    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream added, broadcasting: 1
I0219 12:25:08.749303    2427 log.go:172] (0xc00015c790) Reply frame received for 1
I0219 12:25:08.749427    2427 log.go:172] (0xc00015c790) (0xc000728000) Create stream
I0219 12:25:08.749446    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream added, broadcasting: 3
I0219 12:25:08.752302    2427 log.go:172] (0xc00015c790) Reply frame received for 3
I0219 12:25:08.752362    2427 log.go:172] (0xc00015c790) (0xc000418000) Create stream
I0219 12:25:08.752382    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream added, broadcasting: 5
I0219 12:25:08.755676    2427 log.go:172] (0xc00015c790) Reply frame received for 5
I0219 12:25:09.141492    2427 log.go:172] (0xc00015c790) Data frame received for 3
I0219 12:25:09.141571    2427 log.go:172] (0xc000728000) (3) Data frame handling
I0219 12:25:09.141602    2427 log.go:172] (0xc000728000) (3) Data frame sent
I0219 12:25:09.143210    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream removed, broadcasting: 3
I0219 12:25:09.143280    2427 log.go:172] (0xc00015c790) Data frame received for 1
I0219 12:25:09.143299    2427 log.go:172] (0xc0005cf360) (1) Data frame handling
I0219 12:25:09.143317    2427 log.go:172] (0xc0005cf360) (1) Data frame sent
I0219 12:25:09.143327    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1
I0219 12:25:09.143348    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream removed, broadcasting: 5
I0219 12:25:09.143520    2427 log.go:172] (0xc00015c790) Go away received
I0219 12:25:09.143723    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1
I0219 12:25:09.143849    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream removed, broadcasting: 3
I0219 12:25:09.143887    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc001237890 exit status 126   true [0xc0000105f0 0xc0000106f0 0xc0000107e0] [0xc0000105f0 0xc0000106f0 0xc0000107e0] [0xc000010670 0xc000010790] [0x935700 0x935700] 0xc001b80fc0 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"signal: broken pipe\"": unknown

stderr:
I0219 12:25:08.629568    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Create stream
I0219 12:25:08.629930    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream added, broadcasting: 1
I0219 12:25:08.749303    2427 log.go:172] (0xc00015c790) Reply frame received for 1
I0219 12:25:08.749427    2427 log.go:172] (0xc00015c790) (0xc000728000) Create stream
I0219 12:25:08.749446    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream added, broadcasting: 3
I0219 12:25:08.752302    2427 log.go:172] (0xc00015c790) Reply frame received for 3
I0219 12:25:08.752362    2427 log.go:172] (0xc00015c790) (0xc000418000) Create stream
I0219 12:25:08.752382    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream added, broadcasting: 5
I0219 12:25:08.755676    2427 log.go:172] (0xc00015c790) Reply frame received for 5
I0219 12:25:09.141492    2427 log.go:172] (0xc00015c790) Data frame received for 3
I0219 12:25:09.141571    2427 log.go:172] (0xc000728000) (3) Data frame handling
I0219 12:25:09.141602    2427 log.go:172] (0xc000728000) (3) Data frame sent
I0219 12:25:09.143210    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream removed, broadcasting: 3
I0219 12:25:09.143280    2427 log.go:172] (0xc00015c790) Data frame received for 1
I0219 12:25:09.143299    2427 log.go:172] (0xc0005cf360) (1) Data frame handling
I0219 12:25:09.143317    2427 log.go:172] (0xc0005cf360) (1) Data frame sent
I0219 12:25:09.143327    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1
I0219 12:25:09.143348    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream removed, broadcasting: 5
I0219 12:25:09.143520    2427 log.go:172] (0xc00015c790) Go away received
I0219 12:25:09.143723    2427 log.go:172] (0xc00015c790) (0xc0005cf360) Stream removed, broadcasting: 1
I0219 12:25:09.143849    2427 log.go:172] (0xc00015c790) (0xc000728000) Stream removed, broadcasting: 3
I0219 12:25:09.143887    2427 log.go:172] (0xc00015c790) (0xc000418000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Feb 19 12:25:19.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:19.291: INFO: rc: 1
Feb 19 12:25:19.292: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0016d5a40 exit status 1   true [0xc002032170 0xc002032188 0xc0020321a0] [0xc002032170 0xc002032188 0xc0020321a0] [0xc002032180 0xc002032198] [0x935700 0x935700] 0xc001332fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:25:29.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:29.485: INFO: rc: 1
Feb 19 12:25:29.486: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001237a10 exit status 1   true [0xc000010830 0xc000010860 0xc000010910] [0xc000010830 0xc000010860 0xc000010910] [0xc000010850 0xc0000108f8] [0x935700 0x935700] 0xc001c0cc60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:25:39.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:39.673: INFO: rc: 1
Feb 19 12:25:39.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0da10 exit status 1   true [0xc000ad40d8 0xc000ad40f0 0xc000ad4108] [0xc000ad40d8 0xc000ad40f0 0xc000ad4108] [0xc000ad40e8 0xc000ad4100] [0x935700 0x935700] 0xc001d0d980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:25:49.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:25:49.906: INFO: rc: 1
Feb 19 12:25:49.907: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0dbf0 exit status 1   true [0xc000ad4110 0xc000ad4128 0xc000ad4140] [0xc000ad4110 0xc000ad4128 0xc000ad4140] [0xc000ad4120 0xc000ad4138] [0x935700 0x935700] 0xc001d0df20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:25:59.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:00.070: INFO: rc: 1
Feb 19 12:26:00.070: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0dd10 exit status 1   true [0xc000ad4148 0xc000ad4160 0xc000ad4178] [0xc000ad4148 0xc000ad4160 0xc000ad4178] [0xc000ad4158 0xc000ad4170] [0x935700 0x935700] 0xc0019c41e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:26:10.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:10.239: INFO: rc: 1
Feb 19 12:26:10.240: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0de60 exit status 1   true [0xc000ad4180 0xc000ad4198 0xc000ad41b0] [0xc000ad4180 0xc000ad4198 0xc000ad41b0] [0xc000ad4190 0xc000ad41a8] [0x935700 0x935700] 0xc0019c45a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:26:20.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:20.381: INFO: rc: 1
Feb 19 12:26:20.381: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0dfb0 exit status 1   true [0xc000ad41b8 0xc000ad41d0 0xc000ad41e8] [0xc000ad41b8 0xc000ad41d0 0xc000ad41e8] [0xc000ad41c8 0xc000ad41e0] [0x935700 0x935700] 0xc0019c49c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:26:30.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:30.597: INFO: rc: 1
Feb 19 12:26:30.597: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001237bc0 exit status 1   true [0xc000010920 0xc0000109b0 0xc0000109d8] [0xc000010920 0xc0000109b0 0xc0000109d8] [0xc000010968 0xc0000109c8] [0x935700 0x935700] 0xc001c0d380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:26:40.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:40.743: INFO: rc: 1
Feb 19 12:26:40.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e240 exit status 1   true [0xc000ad41f0 0xc000ad4210 0xc000ad4228] [0xc000ad41f0 0xc000ad4210 0xc000ad4228] [0xc000ad4208 0xc000ad4220] [0x935700 0x935700] 0xc0019c4d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:26:50.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:26:50.876: INFO: rc: 1
Feb 19 12:26:50.877: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0016d5bf0 exit status 1   true [0xc0020321a8 0xc0020321c0 0xc0020321d8] [0xc0020321a8 0xc0020321c0 0xc0020321d8] [0xc0020321b8 0xc0020321d0] [0x935700 0x935700] 0xc001333440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:01.560: INFO: rc: 1
Feb 19 12:27:01.560: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0c120 exit status 1   true [0xc00017c140 0xc0000100f8 0xc000010280] [0xc00017c140 0xc0000100f8 0xc000010280] [0xc0000100a0 0xc0000101d0] [0x935700 0x935700] 0xc001b80240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:11.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:11.716: INFO: rc: 1
Feb 19 12:27:11.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001566210 exit status 1   true [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4010 0xc000ad4028] [0x935700 0x935700] 0xc001d0c600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:21.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:21.891: INFO: rc: 1
Feb 19 12:27:21.891: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0c2d0 exit status 1   true [0xc000010320 0xc0000103a0 0xc0000104f0] [0xc000010320 0xc0000103a0 0xc0000104f0] [0xc000010380 0xc000010480] [0x935700 0x935700] 0xc001b80ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:31.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:32.304: INFO: rc: 1
Feb 19 12:27:32.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001236120 exit status 1   true [0xc002032000 0xc002032018 0xc002032030] [0xc002032000 0xc002032018 0xc002032030] [0xc002032010 0xc002032028] [0x935700 0x935700] 0xc0020d9ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:42.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:42.523: INFO: rc: 1
Feb 19 12:27:42.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015664b0 exit status 1   true [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4048 0xc000ad4070] [0x935700 0x935700] 0xc001d0d4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:27:52.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:27:52.674: INFO: rc: 1
Feb 19 12:27:52.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001566630 exit status 1   true [0xc000ad4080 0xc000ad4098 0xc000ad40b0] [0xc000ad4080 0xc000ad4098 0xc000ad40b0] [0xc000ad4090 0xc000ad40a8] [0x935700 0x935700] 0xc001d0db00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:02.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:02.773: INFO: rc: 1
Feb 19 12:28:02.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e2a0 exit status 1   true [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000ec88 0xc00000ed70] [0x935700 0x935700] 0xc001f301e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:12.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:12.912: INFO: rc: 1
Feb 19 12:28:12.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e3f0 exit status 1   true [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ee48 0xc00000eee8] [0x935700 0x935700] 0xc001f308a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:22.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:23.059: INFO: rc: 1
Feb 19 12:28:23.059: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e570 exit status 1   true [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000efc0 0xc00000f0b8 0xc00000f150] [0xc00000f058 0xc00000f148] [0x935700 0x935700] 0xc001c0c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:33.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:33.176: INFO: rc: 1
Feb 19 12:28:33.176: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001236270 exit status 1   true [0xc002032038 0xc002032050 0xc002032068] [0xc002032038 0xc002032050 0xc002032068] [0xc002032048 0xc002032060] [0x935700 0x935700] 0xc0019c41e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:43.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:43.357: INFO: rc: 1
Feb 19 12:28:43.357: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0012363c0 exit status 1   true [0xc002032070 0xc002032088 0xc0020320a0] [0xc002032070 0xc002032088 0xc0020320a0] [0xc002032080 0xc002032098] [0x935700 0x935700] 0xc0019c45a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:28:53.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:28:53.550: INFO: rc: 1
Feb 19 12:28:53.550: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e690 exit status 1   true [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f158 0xc00000f198 0xc00000f200] [0xc00000f170 0xc00000f1e8] [0x935700 0x935700] 0xc001c0d140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:03.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:03.713: INFO: rc: 1
Feb 19 12:29:03.714: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a0c1b0 exit status 1   true [0xc00017c140 0xc0000100f8 0xc000010280] [0xc00017c140 0xc0000100f8 0xc000010280] [0xc0000100a0 0xc0000101d0] [0x935700 0x935700] 0xc001f301e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:13.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:13.890: INFO: rc: 1
Feb 19 12:29:13.891: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e270 exit status 1   true [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4000 0xc000ad4018 0xc000ad4030] [0xc000ad4010 0xc000ad4028] [0x935700 0x935700] 0xc0020d9ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:23.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:24.042: INFO: rc: 1
Feb 19 12:29:24.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e3c0 exit status 1   true [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4038 0xc000ad4050 0xc000ad4078] [0xc000ad4048 0xc000ad4070] [0x935700 0x935700] 0xc001b80240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:34.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:34.121: INFO: rc: 1
Feb 19 12:29:34.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e5a0 exit status 1   true [0xc000ad4080 0xc000ad4098 0xc000ad40b0] [0xc000ad4080 0xc000ad4098 0xc000ad40b0] [0xc000ad4090 0xc000ad40a8] [0x935700 0x935700] 0xc001b80ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:44.276: INFO: rc: 1
Feb 19 12:29:44.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00094e6f0 exit status 1   true [0xc000ad40b8 0xc000ad40d0 0xc000ad40e8] [0xc000ad40b8 0xc000ad40d0 0xc000ad40e8] [0xc000ad40c8 0xc000ad40e0] [0x935700 0x935700] 0xc001b80d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:29:54.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:29:54.418: INFO: rc: 1
Feb 19 12:29:54.419: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015662d0 exit status 1   true [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000e288 0xc00000eca8 0xc00000ed78] [0xc00000ec88 0xc00000ed70] [0x935700 0x935700] 0xc001d0c600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:30:04.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:30:04.608: INFO: rc: 1
Feb 19 12:30:04.609: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001566660 exit status 1   true [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ed98 0xc00000eea0 0xc00000ef58] [0xc00000ee48 0xc00000eee8] [0x935700 0x935700] 0xc001d0d4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 19 12:30:14.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2pwp6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 19 12:30:14.723: INFO: rc: 1
Feb 19 12:30:14.723: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb 19 12:30:14.723: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 19 12:30:14.745: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2pwp6
Feb 19 12:30:14.753: INFO: Scaling statefulset ss to 0
Feb 19 12:30:14.785: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 12:30:14.789: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:30:14.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2pwp6" for this suite.
Feb 19 12:30:22.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:30:23.032: INFO: namespace: e2e-tests-statefulset-2pwp6, resource: bindings, ignored listing per whitelist
Feb 19 12:30:23.055: INFO: namespace e2e-tests-statefulset-2pwp6 deletion completed in 8.203038518s

• [SLOW TEST:400.695 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
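(Note: the long run of RunHostCmd failures above is the expected tail of this spec. Once ss-2 has been deleted during scale-down, every retry of the exec returns "pods \"ss-2\" not found" until the loop gives up; the test then scales the StatefulSet to 0 and verifies that pods terminate in reverse ordinal order, which is standard StatefulSet behaviour. A rough manual equivalent, reusing the namespace and names from this run and intended only as an illustration, not the harness code:

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-2pwp6 scale statefulset ss --replicas=0
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-2pwp6 get pods -w   # watch ss-2, ss-1, ss-0 terminate in reverse order
)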
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:30:23.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 19 12:30:23.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kf295'
Feb 19 12:30:26.051: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 19 12:30:26.051: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 19 12:30:26.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-kf295'
Feb 19 12:30:26.459: INFO: stderr: ""
Feb 19 12:30:26.459: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:30:26.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kf295" for this suite.
Feb 19 12:30:50.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:30:50.725: INFO: namespace: e2e-tests-kubectl-kf295, resource: bindings, ignored listing per whitelist
Feb 19 12:30:50.808: INFO: namespace e2e-tests-kubectl-kf295 deletion completed in 24.299067744s

• [SLOW TEST:27.752 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
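(Note: the stderr captured above shows that `kubectl run --generator=job/v1` was already deprecated on this client. On newer kubectl versions the suggested replacement is `kubectl create job`; a hedged equivalent of what this spec exercises, reusing the names from the log, would be:

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-kf295 create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-kf295 get job e2e-test-nginx-job
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-kf295 delete job e2e-test-nginx-job
)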
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:30:50.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 19 12:30:51.023: INFO: Waiting up to 5m0s for pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-wm9g4" to be "success or failure"
Feb 19 12:30:51.029: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119503ms
Feb 19 12:30:53.053: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02937862s
Feb 19 12:30:55.064: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041276341s
Feb 19 12:30:57.290: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267209704s
Feb 19 12:30:59.308: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28467501s
Feb 19 12:31:01.327: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304248614s
Feb 19 12:31:03.370: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.347247288s
STEP: Saw pod success
Feb 19 12:31:03.371: INFO: Pod "pod-aa048ac2-5313-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:31:03.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aa048ac2-5313-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:31:04.767: INFO: Waiting for pod pod-aa048ac2-5313-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:31:05.058: INFO: Pod pod-aa048ac2-5313-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:31:05.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wm9g4" for this suite.
Feb 19 12:31:11.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:31:11.334: INFO: namespace: e2e-tests-emptydir-wm9g4, resource: bindings, ignored listing per whitelist
Feb 19 12:31:11.421: INFO: namespace e2e-tests-emptydir-wm9g4 deletion completed in 6.345293398s

• [SLOW TEST:20.613 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
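(Note: this spec mounts an emptyDir backed by tmpfs (medium: Memory) and asserts the mount point has the expected mode. A minimal stand-alone sketch of that volume shape; the pod name, image, and stat command are assumptions, not the test's actual mounttest container:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
)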
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:31:11.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 19 12:31:11.667: INFO: Waiting up to 5m0s for pod "pod-b6521816-5313-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-psjl5" to be "success or failure"
Feb 19 12:31:11.677: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.430835ms
Feb 19 12:31:13.865: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19789245s
Feb 19 12:31:15.935: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268477814s
Feb 19 12:31:17.995: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327584045s
Feb 19 12:31:20.033: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365665559s
Feb 19 12:31:22.044: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377414989s
Feb 19 12:31:24.079: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.411889694s
STEP: Saw pod success
Feb 19 12:31:24.079: INFO: Pod "pod-b6521816-5313-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:31:24.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b6521816-5313-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:31:24.987: INFO: Waiting for pod pod-b6521816-5313-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:31:24.995: INFO: Pod pod-b6521816-5313-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:31:24.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-psjl5" for this suite.
Feb 19 12:31:33.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:31:33.207: INFO: namespace: e2e-tests-emptydir-psjl5, resource: bindings, ignored listing per whitelist
Feb 19 12:31:33.295: INFO: namespace e2e-tests-emptydir-psjl5 deletion completed in 8.294143167s

• [SLOW TEST:21.872 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
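(Note: the "(non-root,0666,default)" naming encodes three knobs: the pod runs as a non-root UID, the test file is created with mode 0666, and the emptyDir uses the default medium (node disk rather than tmpfs). A hedged sketch of the non-root part, which is what differs from the tmpfs case above; the UID, image, and commands are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # run the pod as a non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u && touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: node-local disk
EOF
)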
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:31:33.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 19 12:31:33.551: INFO: Waiting up to 5m0s for pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-5vr2v" to be "success or failure"
Feb 19 12:31:33.557: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588153ms
Feb 19 12:31:35.701: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150068517s
Feb 19 12:31:37.745: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194576228s
Feb 19 12:31:40.635: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084513903s
Feb 19 12:31:42.663: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112172876s
Feb 19 12:31:44.693: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.14220701s
STEP: Saw pod success
Feb 19 12:31:44.693: INFO: Pod "pod-c35c7f66-5313-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:31:44.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c35c7f66-5313-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:31:44.971: INFO: Waiting for pod pod-c35c7f66-5313-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:31:45.037: INFO: Pod pod-c35c7f66-5313-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:31:45.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5vr2v" for this suite.
Feb 19 12:31:51.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:31:51.216: INFO: namespace: e2e-tests-emptydir-5vr2v, resource: bindings, ignored listing per whitelist
Feb 19 12:31:51.340: INFO: namespace e2e-tests-emptydir-5vr2v deletion completed in 6.284375762s

• [SLOW TEST:18.044 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:31:51.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 19 12:31:51.579: INFO: Waiting up to 5m0s for pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-27bzt" to be "success or failure"
Feb 19 12:31:51.603: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.267845ms
Feb 19 12:31:53.848: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268766815s
Feb 19 12:31:55.870: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291098241s
Feb 19 12:31:58.143: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.564095116s
Feb 19 12:32:00.155: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576544214s
Feb 19 12:32:02.245: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665895941s
Feb 19 12:32:04.261: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.681738505s
STEP: Saw pod success
Feb 19 12:32:04.261: INFO: Pod "pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:32:04.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:32:04.452: INFO: Waiting for pod pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:32:04.463: INFO: Pod pod-ce1c2c8d-5313-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:32:04.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-27bzt" for this suite.
Feb 19 12:32:10.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:32:10.723: INFO: namespace: e2e-tests-emptydir-27bzt, resource: bindings, ignored listing per whitelist
Feb 19 12:32:10.785: INFO: namespace e2e-tests-emptydir-27bzt deletion completed in 6.304382204s

• [SLOW TEST:19.445 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:32:10.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:32:21.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gkqr4" for this suite.
Feb 19 12:33:05.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:33:05.277: INFO: namespace: e2e-tests-kubelet-test-gkqr4, resource: bindings, ignored listing per whitelist
Feb 19 12:33:05.410: INFO: namespace e2e-tests-kubelet-test-gkqr4 deletion completed in 44.290457118s

• [SLOW TEST:54.625 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
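(Note: this spec checks that stdout from a busybox command is retrievable through the kubelet's log path. An illustrative manual check of the same behaviour; the pod name and message are assumptions:

kubectl run busybox-logs-demo --image=busybox --restart=Never -- sh -c 'echo hello from busybox'
kubectl logs busybox-logs-demo    # should print: hello from busybox
kubectl delete pod busybox-logs-demo
)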
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:33:05.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nqvs9
Feb 19 12:33:15.857: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nqvs9
STEP: checking the pod's current state and verifying that restartCount is present
Feb 19 12:33:15.869: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:37:16.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nqvs9" for this suite.
Feb 19 12:37:22.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:37:22.378: INFO: namespace: e2e-tests-container-probe-nqvs9, resource: bindings, ignored listing per whitelist
Feb 19 12:37:22.568: INFO: namespace e2e-tests-container-probe-nqvs9 deletion completed in 6.378149667s

• [SLOW TEST:257.158 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
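(Note: this spec waits roughly four minutes after the liveness-http pod starts and then asserts its restart count is still 0, i.e. a liveness probe against a healthy endpoint never triggers a restart. A minimal sketch of such a probe plus the restart-count check; the e2e pod uses an image that serves /healthz, whereas the plain nginx image below does not, so this sketch probes / instead, and all names and timings are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: app
    image: nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /    # probe a path the server actually serves
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
EOF

kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
)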
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:37:22.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h9zzp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 19 12:37:24.153: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 19 12:37:56.591: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-h9zzp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 12:37:56.591: INFO: >>> kubeConfig: /root/.kube/config
I0219 12:37:56.672129       8 log.go:172] (0xc0000ebe40) (0xc000f2a280) Create stream
I0219 12:37:56.672165       8 log.go:172] (0xc0000ebe40) (0xc000f2a280) Stream added, broadcasting: 1
I0219 12:37:56.678768       8 log.go:172] (0xc0000ebe40) Reply frame received for 1
I0219 12:37:56.678800       8 log.go:172] (0xc0000ebe40) (0xc0027bcaa0) Create stream
I0219 12:37:56.678814       8 log.go:172] (0xc0000ebe40) (0xc0027bcaa0) Stream added, broadcasting: 3
I0219 12:37:56.680181       8 log.go:172] (0xc0000ebe40) Reply frame received for 3
I0219 12:37:56.680201       8 log.go:172] (0xc0000ebe40) (0xc0027bcb40) Create stream
I0219 12:37:56.680209       8 log.go:172] (0xc0000ebe40) (0xc0027bcb40) Stream added, broadcasting: 5
I0219 12:37:56.681766       8 log.go:172] (0xc0000ebe40) Reply frame received for 5
I0219 12:37:57.841192       8 log.go:172] (0xc0000ebe40) Data frame received for 3
I0219 12:37:57.841242       8 log.go:172] (0xc0027bcaa0) (3) Data frame handling
I0219 12:37:57.841271       8 log.go:172] (0xc0027bcaa0) (3) Data frame sent
I0219 12:37:58.007657       8 log.go:172] (0xc0000ebe40) (0xc0027bcaa0) Stream removed, broadcasting: 3
I0219 12:37:58.007787       8 log.go:172] (0xc0000ebe40) Data frame received for 1
I0219 12:37:58.007820       8 log.go:172] (0xc000f2a280) (1) Data frame handling
I0219 12:37:58.007852       8 log.go:172] (0xc000f2a280) (1) Data frame sent
I0219 12:37:58.007896       8 log.go:172] (0xc0000ebe40) (0xc0027bcb40) Stream removed, broadcasting: 5
I0219 12:37:58.007992       8 log.go:172] (0xc0000ebe40) (0xc000f2a280) Stream removed, broadcasting: 1
I0219 12:37:58.008116       8 log.go:172] (0xc0000ebe40) Go away received
I0219 12:37:58.009545       8 log.go:172] (0xc0000ebe40) (0xc000f2a280) Stream removed, broadcasting: 1
I0219 12:37:58.009679       8 log.go:172] (0xc0000ebe40) (0xc0027bcaa0) Stream removed, broadcasting: 3
I0219 12:37:58.009708       8 log.go:172] (0xc0000ebe40) (0xc0027bcb40) Stream removed, broadcasting: 5
Feb 19 12:37:58.009: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:37:58.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-h9zzp" for this suite.
Feb 19 12:38:22.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:38:22.227: INFO: namespace: e2e-tests-pod-network-test-h9zzp, resource: bindings, ignored listing per whitelist
Feb 19 12:38:22.279: INFO: namespace e2e-tests-pod-network-test-h9zzp deletion completed in 24.250966138s

• [SLOW TEST:59.710 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
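(Note: the ExecWithOptions line above shows the actual check: from the host-network test pod, send the string "hostName" over UDP to the netserver pod at 10.32.0.4:8081 with nc and expect a non-empty reply. The same probe can be run by hand against this run's pods; the namespace, pod, and container names are taken from the log, and the IP below is the one observed in this run (normally it would be looked up first):

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-pod-network-test-h9zzp get pod netserver-0 -o jsonpath='{.status.podIP}'
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-pod-network-test-h9zzp exec host-test-container-pod -c hostexec -- /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081"
)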
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:38:22.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b75cb41b-5314-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:38:22.942: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-rpvkv" to be "success or failure"
Feb 19 12:38:22.971: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.686074ms
Feb 19 12:38:25.064: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121849099s
Feb 19 12:38:27.078: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136307513s
Feb 19 12:38:29.412: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469850009s
Feb 19 12:38:31.459: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516956245s
Feb 19 12:38:33.471: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.528674158s
STEP: Saw pod success
Feb 19 12:38:33.471: INFO: Pod "pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:38:33.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 19 12:38:35.404: INFO: Waiting for pod pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:38:35.443: INFO: Pod pod-projected-secrets-b75f8048-5314-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:38:35.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rpvkv" for this suite.
Feb 19 12:38:41.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:38:41.916: INFO: namespace: e2e-tests-projected-rpvkv, resource: bindings, ignored listing per whitelist
Feb 19 12:38:41.948: INFO: namespace e2e-tests-projected-rpvkv deletion completed in 6.465655944s

• [SLOW TEST:19.668 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
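(Note: this spec mounts a secret through a projected volume and checks that defaultMode is applied to the projected files. The volume shape it exercises looks roughly like the following; the secret name, mode, mount path, and image are assumptions, since the real test generates its own names:

kubectl create secret generic my-secret --from-literal=key=value

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /projected && stat -c '%a' /projected/*"]
    volumeMounts:
    - name: secret-vol
      mountPath: /projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400   # applied to every projected file
      sources:
      - secret:
          name: my-secret
EOF
)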
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:38:41.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:38:52.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8snfk" for this suite.
Feb 19 12:39:47.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:39:47.105: INFO: namespace: e2e-tests-kubelet-test-8snfk, resource: bindings, ignored listing per whitelist
Feb 19 12:39:47.282: INFO: namespace e2e-tests-kubelet-test-8snfk deletion completed in 54.57534949s

• [SLOW TEST:65.334 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
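(Note: "should not write to root filesystem" exercises a read-only root filesystem via the container securityContext. A hedged illustration; the pod name, image, and write attempt are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /should-fail 2>&1 || echo 'root filesystem is read-only, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs readonly-rootfs-demo
)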
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:39:47.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 19 12:39:47.580: INFO: Waiting up to 5m0s for pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008" in namespace "e2e-tests-var-expansion-5jkrn" to be "success or failure"
Feb 19 12:39:47.699: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 119.14469ms
Feb 19 12:39:49.836: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256060515s
Feb 19 12:39:51.857: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276597581s
Feb 19 12:39:54.565: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.984728266s
Feb 19 12:39:56.597: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.016523496s
Feb 19 12:39:58.635: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.05489402s
STEP: Saw pod success
Feb 19 12:39:58.635: INFO: Pod "var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:39:58.684: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 12:39:58.887: INFO: Waiting for pod var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:39:58.904: INFO: Pod var-expansion-e9d246bc-5314-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:39:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-5jkrn" for this suite.
Feb 19 12:40:05.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:40:05.287: INFO: namespace: e2e-tests-var-expansion-5jkrn, resource: bindings, ignored listing per whitelist
Feb 19 12:40:05.306: INFO: namespace e2e-tests-var-expansion-5jkrn deletion completed in 6.385694535s

• [SLOW TEST:18.023 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
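
The variable-expansion test above creates a pod whose args reference an environment variable with the $(VAR) syntax, which the kubelet expands before starting the container. A minimal sketch, reusing the dapi-container name from the log; the variable name and value are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    # $(TEST_VAR) is expanded by Kubernetes, not by the shell
    args: ["echo substituted value is $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
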
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:40:05.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 12:40:05.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-4wxjk" to be "success or failure"
Feb 19 12:40:05.451: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.07486ms
Feb 19 12:40:07.606: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165835333s
Feb 19 12:40:09.618: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177660192s
Feb 19 12:40:12.476: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036140039s
Feb 19 12:40:14.503: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 9.063131295s
Feb 19 12:40:16.540: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 11.100134363s
Feb 19 12:40:18.928: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.488167888s
STEP: Saw pod success
Feb 19 12:40:18.929: INFO: Pod "downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:40:18.950: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 12:40:19.412: INFO: Waiting for pod downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:40:19.492: INFO: Pod downwardapi-volume-f47a3641-5314-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:40:19.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4wxjk" for this suite.
Feb 19 12:40:25.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:40:25.625: INFO: namespace: e2e-tests-projected-4wxjk, resource: bindings, ignored listing per whitelist
Feb 19 12:40:25.761: INFO: namespace e2e-tests-projected-4wxjk deletion completed in 6.258856658s

• [SLOW TEST:20.454 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
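
The projected downwardAPI test above mounts the pod's own name as a file inside the container. A minimal sketch, reusing the client-container name from the log; the mount path and file name are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name       # the file contains the pod's own name
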
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:40:25.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-00bfdfba-5315-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:40:26.042: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-7gj2v" to be "success or failure"
Feb 19 12:40:26.048: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.729201ms
Feb 19 12:40:28.133: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091314308s
Feb 19 12:40:30.161: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118695951s
Feb 19 12:40:32.386: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343366842s
Feb 19 12:40:34.406: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.364125536s
Feb 19 12:40:36.468: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.426099321s
STEP: Saw pod success
Feb 19 12:40:36.469: INFO: Pod "pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:40:36.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 19 12:40:36.910: INFO: Waiting for pod pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:40:36.923: INFO: Pod pod-projected-secrets-00c12832-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:40:36.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7gj2v" for this suite.
Feb 19 12:40:42.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:40:43.017: INFO: namespace: e2e-tests-projected-7gj2v, resource: bindings, ignored listing per whitelist
Feb 19 12:40:43.086: INFO: namespace e2e-tests-projected-7gj2v deletion completed in 6.146254208s

• [SLOW TEST:17.325 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
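
The mappings-and-item-mode test above projects a secret key to a remapped path with an explicit per-item mode. A minimal sketch; the secret name, key, and target path are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-mapping        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                           # stand-in image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map    # hypothetical secret name
          items:
          - key: data-1                      # hypothetical key
            path: new-path-data-1            # remapped file name inside the mount
            mode: 0400                       # per-item mode overrides any defaultMode
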
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:40:43.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 19 12:40:43.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-klspj'
Feb 19 12:40:45.373: INFO: stderr: ""
Feb 19 12:40:45.373: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 19 12:40:45.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-klspj'
Feb 19 12:40:50.228: INFO: stderr: ""
Feb 19 12:40:50.229: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:40:50.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-klspj" for this suite.
Feb 19 12:40:56.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:40:56.415: INFO: namespace: e2e-tests-kubectl-klspj, resource: bindings, ignored listing per whitelist
Feb 19 12:40:56.637: INFO: namespace e2e-tests-kubectl-klspj deletion completed in 6.395397483s

• [SLOW TEST:13.551 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
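
The kubectl run invocation above uses --restart=Never with --generator=run-pod/v1, which creates a bare Pod rather than a Deployment or Job. The object it produces should look roughly like the following; this is an approximation of the generator's output, not a dump taken from the cluster.

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
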
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:40:56.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 19 12:40:56.971: INFO: Waiting up to 5m0s for pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-w778m" to be "success or failure"
Feb 19 12:40:57.042: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 71.217789ms
Feb 19 12:40:59.202: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230500936s
Feb 19 12:41:01.220: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248812491s
Feb 19 12:41:03.489: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51740963s
Feb 19 12:41:06.075: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.103890513s
Feb 19 12:41:08.161: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.189907379s
STEP: Saw pod success
Feb 19 12:41:08.161: INFO: Pod "pod-132f96ec-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:41:08.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-132f96ec-5315-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:41:08.816: INFO: Waiting for pod pod-132f96ec-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:41:08.851: INFO: Pod pod-132f96ec-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:41:08.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w778m" for this suite.
Feb 19 12:41:15.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:41:15.202: INFO: namespace: e2e-tests-emptydir-w778m, resource: bindings, ignored listing per whitelist
Feb 19 12:41:15.206: INFO: namespace e2e-tests-emptydir-w778m deletion completed in 6.303255492s

• [SLOW TEST:18.568 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
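
The emptyDir test above checks file modes on the default medium (node disk). A minimal sketch of an equivalent pod; the conformance suite uses its own mount-checking image, so the busybox command here is only an illustrative substitute.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 inside the emptyDir and show its permissions
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                             # default medium; the tmpfs variants set medium: Memory
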
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:41:15.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:41:15.559: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1e2e3dd0-5315-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001cd0a02), BlockOwnerDeletion:(*bool)(0xc001cd0a03)}}
Feb 19 12:41:15.583: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1e2b94a8-5315-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f7d912), BlockOwnerDeletion:(*bool)(0xc001f7d913)}}
Feb 19 12:41:15.741: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1e2cb165-5315-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f7db22), BlockOwnerDeletion:(*bool)(0xc001f7db23)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:41:20.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-j2d5p" for this suite.
Feb 19 12:41:28.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:41:28.980: INFO: namespace: e2e-tests-gc-j2d5p, resource: bindings, ignored listing per whitelist
Feb 19 12:41:29.138: INFO: namespace e2e-tests-gc-j2d5p deletion completed in 8.262099954s

• [SLOW TEST:13.932 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
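
The garbage-collector test above wires pod1, pod2, and pod3 into an ownership cycle through metadata.ownerReferences (the dump of those references appears in the log), then checks that deletion is not blocked by the cycle. A minimal sketch of one link in that cycle; the uid value is a placeholder, since an ownerReference must carry the actual UID of the owning object, and the pause image is an assumption.

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: "<uid-of-pod3>"                     # placeholder; must be pod3's real UID
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1              # assumed image; any long-running container works
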
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:41:29.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-2727614d-5315-11ea-a0a3-0242ac110008
STEP: Creating secret with name s-test-opt-upd-272762c6-5315-11ea-a0a3-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2727614d-5315-11ea-a0a3-0242ac110008
STEP: Updating secret s-test-opt-upd-272762c6-5315-11ea-a0a3-0242ac110008
STEP: Creating secret with name s-test-opt-create-27276344-5315-11ea-a0a3-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:41:48.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lj7lx" for this suite.
Feb 19 12:42:12.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:42:13.175: INFO: namespace: e2e-tests-projected-lj7lx, resource: bindings, ignored listing per whitelist
Feb 19 12:42:13.239: INFO: namespace e2e-tests-projected-lj7lx deletion completed in 24.288957609s

• [SLOW TEST:44.100 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
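
The optional-updates test above mounts several secrets marked optional, then deletes one, updates another, and creates a third while the pod is running, waiting for the volume contents to converge. A minimal sketch of the volume shape; the mount path and container details are assumptions, and the secret names mirror the prefixes shown in the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-optional       # hypothetical name
spec:
  containers:
  - name: secret-watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-volumes; sleep 5; done"]
    volumeMounts:
    - name: secret-volumes
      mountPath: /etc/secret-volumes
  volumes:
  - name: secret-volumes
    projected:
      sources:
      - secret:
          name: s-test-opt-del               # deleted mid-test; optional, so the pod keeps running
          optional: true
      - secret:
          name: s-test-opt-upd               # updated mid-test
          optional: true
      - secret:
          name: s-test-opt-create            # created only after the pod starts
          optional: true
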
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:42:13.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 19 12:42:13.420: INFO: Waiting up to 5m0s for pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-t5f26" to be "success or failure"
Feb 19 12:42:13.445: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.018375ms
Feb 19 12:42:15.478: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058585194s
Feb 19 12:42:17.505: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085281251s
Feb 19 12:42:20.389: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.9694686s
Feb 19 12:42:22.646: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226635707s
Feb 19 12:42:24.687: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.267343671s
STEP: Saw pod success
Feb 19 12:42:24.687: INFO: Pod "downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:42:24.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 12:42:25.129: INFO: Waiting for pod downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:42:25.136: INFO: Pod downward-api-40c0bac2-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:42:25.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t5f26" for this suite.
Feb 19 12:42:31.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:42:31.538: INFO: namespace: e2e-tests-downward-api-t5f26, resource: bindings, ignored listing per whitelist
Feb 19 12:42:31.556: INFO: namespace e2e-tests-downward-api-t5f26 deletion completed in 6.408681063s

• [SLOW TEST:18.316 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
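
The downward API test above injects the pod's name, namespace, and IP into environment variables through fieldRef selectors. A minimal sketch, reusing the dapi-container name from the log; the variable names are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
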
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:42:31.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 19 12:42:31.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:32.314: INFO: stderr: ""
Feb 19 12:42:32.314: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 19 12:42:32.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:32.444: INFO: stderr: ""
Feb 19 12:42:32.444: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
Feb 19 12:42:32.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j4g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:32.657: INFO: stderr: ""
Feb 19 12:42:32.658: INFO: stdout: ""
Feb 19 12:42:32.658: INFO: update-demo-nautilus-6j4g2 is created but not running
Feb 19 12:42:37.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:37.759: INFO: stderr: ""
Feb 19 12:42:37.759: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
Feb 19 12:42:37.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j4g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:37.883: INFO: stderr: ""
Feb 19 12:42:37.883: INFO: stdout: ""
Feb 19 12:42:37.883: INFO: update-demo-nautilus-6j4g2 is created but not running
Feb 19 12:42:42.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:43.163: INFO: stderr: ""
Feb 19 12:42:43.163: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
Feb 19 12:42:43.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j4g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:43.341: INFO: stderr: ""
Feb 19 12:42:43.341: INFO: stdout: ""
Feb 19 12:42:43.341: INFO: update-demo-nautilus-6j4g2 is created but not running
Feb 19 12:42:48.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:48.571: INFO: stderr: ""
Feb 19 12:42:48.571: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
Feb 19 12:42:48.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j4g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:48.732: INFO: stderr: ""
Feb 19 12:42:48.733: INFO: stdout: "true"
Feb 19 12:42:48.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j4g2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:48.906: INFO: stderr: ""
Feb 19 12:42:48.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:42:48.906: INFO: validating pod update-demo-nautilus-6j4g2
Feb 19 12:42:48.923: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:42:48.923: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:42:48.923: INFO: update-demo-nautilus-6j4g2 is verified up and running
Feb 19 12:42:48.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:49.067: INFO: stderr: ""
Feb 19 12:42:49.068: INFO: stdout: "true"
Feb 19 12:42:49.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:49.154: INFO: stderr: ""
Feb 19 12:42:49.154: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:42:49.154: INFO: validating pod update-demo-nautilus-hn5zq
Feb 19 12:42:49.163: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:42:49.163: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:42:49.163: INFO: update-demo-nautilus-hn5zq is verified up and running
STEP: scaling down the replication controller
Feb 19 12:42:49.166: INFO: scanned /root for discovery docs: 
Feb 19 12:42:49.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:50.384: INFO: stderr: ""
Feb 19 12:42:50.384: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 19 12:42:50.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:52.293: INFO: stderr: ""
Feb 19 12:42:52.293: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 19 12:42:57.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:42:57.511: INFO: stderr: ""
Feb 19 12:42:57.512: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 19 12:43:02.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:02.870: INFO: stderr: ""
Feb 19 12:43:02.870: INFO: stdout: "update-demo-nautilus-6j4g2 update-demo-nautilus-hn5zq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 19 12:43:07.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:07.976: INFO: stderr: ""
Feb 19 12:43:07.976: INFO: stdout: "update-demo-nautilus-hn5zq "
Feb 19 12:43:07.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:08.097: INFO: stderr: ""
Feb 19 12:43:08.097: INFO: stdout: "true"
Feb 19 12:43:08.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:08.254: INFO: stderr: ""
Feb 19 12:43:08.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:43:08.255: INFO: validating pod update-demo-nautilus-hn5zq
Feb 19 12:43:08.294: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:43:08.294: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:43:08.294: INFO: update-demo-nautilus-hn5zq is verified up and running
STEP: scaling up the replication controller
Feb 19 12:43:08.297: INFO: scanned /root for discovery docs: 
Feb 19 12:43:08.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:09.640: INFO: stderr: ""
Feb 19 12:43:09.641: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 19 12:43:09.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:10.122: INFO: stderr: ""
Feb 19 12:43:10.122: INFO: stdout: "update-demo-nautilus-hn5zq update-demo-nautilus-jcks4 "
Feb 19 12:43:10.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:10.279: INFO: stderr: ""
Feb 19 12:43:10.279: INFO: stdout: "true"
Feb 19 12:43:10.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:10.387: INFO: stderr: ""
Feb 19 12:43:10.387: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:43:10.387: INFO: validating pod update-demo-nautilus-hn5zq
Feb 19 12:43:10.397: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:43:10.397: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:43:10.397: INFO: update-demo-nautilus-hn5zq is verified up and running
Feb 19 12:43:10.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcks4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:10.509: INFO: stderr: ""
Feb 19 12:43:10.509: INFO: stdout: ""
Feb 19 12:43:10.509: INFO: update-demo-nautilus-jcks4 is created but not running
Feb 19 12:43:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:15.670: INFO: stderr: ""
Feb 19 12:43:15.670: INFO: stdout: "update-demo-nautilus-hn5zq update-demo-nautilus-jcks4 "
Feb 19 12:43:15.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:15.802: INFO: stderr: ""
Feb 19 12:43:15.802: INFO: stdout: "true"
Feb 19 12:43:15.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:15.910: INFO: stderr: ""
Feb 19 12:43:15.910: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:43:15.910: INFO: validating pod update-demo-nautilus-hn5zq
Feb 19 12:43:15.927: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:43:15.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:43:15.927: INFO: update-demo-nautilus-hn5zq is verified up and running
Feb 19 12:43:15.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcks4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:16.034: INFO: stderr: ""
Feb 19 12:43:16.034: INFO: stdout: ""
Feb 19 12:43:16.034: INFO: update-demo-nautilus-jcks4 is created but not running
Feb 19 12:43:21.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.197: INFO: stderr: ""
Feb 19 12:43:21.197: INFO: stdout: "update-demo-nautilus-hn5zq update-demo-nautilus-jcks4 "
Feb 19 12:43:21.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.375: INFO: stderr: ""
Feb 19 12:43:21.375: INFO: stdout: "true"
Feb 19 12:43:21.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.503: INFO: stderr: ""
Feb 19 12:43:21.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:43:21.503: INFO: validating pod update-demo-nautilus-hn5zq
Feb 19 12:43:21.509: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:43:21.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:43:21.510: INFO: update-demo-nautilus-hn5zq is verified up and running
Feb 19 12:43:21.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcks4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.603: INFO: stderr: ""
Feb 19 12:43:21.603: INFO: stdout: "true"
Feb 19 12:43:21.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcks4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.720: INFO: stderr: ""
Feb 19 12:43:21.720: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:43:21.720: INFO: validating pod update-demo-nautilus-jcks4
Feb 19 12:43:21.733: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:43:21.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:43:21.733: INFO: update-demo-nautilus-jcks4 is verified up and running
STEP: using delete to clean up resources
Feb 19 12:43:21.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:21.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:43:21.924: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 19 12:43:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-vzl8h'
Feb 19 12:43:22.254: INFO: stderr: "No resources found.\n"
Feb 19 12:43:22.254: INFO: stdout: ""
Feb 19 12:43:22.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-vzl8h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 19 12:43:22.632: INFO: stderr: ""
Feb 19 12:43:22.633: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:43:22.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vzl8h" for this suite.
Feb 19 12:43:46.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:43:46.881: INFO: namespace: e2e-tests-kubectl-vzl8h, resource: bindings, ignored listing per whitelist
Feb 19 12:43:46.912: INFO: namespace e2e-tests-kubectl-vzl8h deletion completed in 24.262736863s

• [SLOW TEST:75.355 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
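
The Update Demo test above creates a replication controller from stdin, scales it down to one replica and back up to two with kubectl scale, and validates the nautilus image in each surviving pod. The manifest itself is not echoed in the log; the following sketch is only consistent with the pod names, label, and image that do appear.

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
# scaled during the test with, for example:
#   kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
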
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:43:46.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:43:47.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rf7pv" for this suite.
Feb 19 12:44:11.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:44:11.557: INFO: namespace: e2e-tests-pods-rf7pv, resource: bindings, ignored listing per whitelist
Feb 19 12:44:11.563: INFO: namespace e2e-tests-pods-rf7pv deletion completed in 24.276789325s

• [SLOW TEST:24.651 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
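
The QOS-class test above verifies that status.qosClass is populated once a pod is admitted. A minimal sketch of a pod that should land in the Guaranteed class, since requests equal limits for every resource of its only container; the name and image are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: qos-class-example                    # hypothetical name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1              # assumed image
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m                            # requests == limits -> qosClass: Guaranteed
        memory: 100Mi
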
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:44:11.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:44:11.890: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:44:13.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-7v4b7" for this suite.
Feb 19 12:44:21.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:44:21.284: INFO: namespace: e2e-tests-custom-resource-definition-7v4b7, resource: bindings, ignored listing per whitelist
Feb 19 12:44:21.383: INFO: namespace e2e-tests-custom-resource-definition-7v4b7 deletion completed in 8.226797931s

• [SLOW TEST:9.820 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
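
The CustomResourceDefinition test above only creates and deletes a definition object through the apiextensions API. A minimal sketch of such a definition for the v1beta1 API served by this cluster version; the group, kind, and plural are hypothetical.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com          # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
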
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:44:21.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 19 12:44:21.798: INFO: Waiting up to 5m0s for pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-qpslh" to be "success or failure"
Feb 19 12:44:21.810: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.099098ms
Feb 19 12:44:24.401: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603437952s
Feb 19 12:44:26.424: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.626160004s
Feb 19 12:44:28.884: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.086506356s
Feb 19 12:44:30.906: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.108007358s
Feb 19 12:44:32.929: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.131381744s
STEP: Saw pod success
Feb 19 12:44:32.930: INFO: Pod "pod-8d4472e9-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:44:32.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8d4472e9-5315-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:44:33.125: INFO: Waiting for pod pod-8d4472e9-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:44:33.134: INFO: Pod pod-8d4472e9-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:44:33.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qpslh" for this suite.
Feb 19 12:44:39.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:44:39.335: INFO: namespace: e2e-tests-emptydir-qpslh, resource: bindings, ignored listing per whitelist
Feb 19 12:44:39.405: INFO: namespace e2e-tests-emptydir-qpslh deletion completed in 6.261718473s

• [SLOW TEST:18.021 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:44:39.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 19 12:44:39.629: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 19 12:44:39.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:40.145: INFO: stderr: ""
Feb 19 12:44:40.146: INFO: stdout: "service/redis-slave created\n"
Feb 19 12:44:40.147: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 19 12:44:40.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:40.541: INFO: stderr: ""
Feb 19 12:44:40.541: INFO: stdout: "service/redis-master created\n"
Feb 19 12:44:40.542: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 19 12:44:40.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:40.959: INFO: stderr: ""
Feb 19 12:44:40.959: INFO: stdout: "service/frontend created\n"
Feb 19 12:44:40.961: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 19 12:44:40.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:41.420: INFO: stderr: ""
Feb 19 12:44:41.421: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 19 12:44:41.423: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 19 12:44:41.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:41.874: INFO: stderr: ""
Feb 19 12:44:41.874: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 19 12:44:41.876: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 19 12:44:41.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:44:42.613: INFO: stderr: ""
Feb 19 12:44:42.613: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 19 12:44:42.613: INFO: Waiting for all frontend pods to be Running.
Feb 19 12:45:17.668: INFO: Waiting for frontend to serve content.
Feb 19 12:45:17.728: INFO: Trying to add a new entry to the guestbook.
Feb 19 12:45:17.767: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 19 12:45:17.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:18.038: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:18.038: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 19 12:45:18.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:18.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:18.319: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 19 12:45:18.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:18.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:18.478: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 19 12:45:18.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:18.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:18.659: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 19 12:45:18.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:19.005: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:19.005: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 19 12:45:19.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t57d4'
Feb 19 12:45:19.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:45:19.380: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:45:19.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t57d4" for this suite.
Feb 19 12:46:05.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:46:05.709: INFO: namespace: e2e-tests-kubectl-t57d4, resource: bindings, ignored listing per whitelist
Feb 19 12:46:05.770: INFO: namespace e2e-tests-kubectl-t57d4 deletion completed in 46.374308896s

• [SLOW TEST:86.365 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
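
Note: the guestbook Deployments above are declared with the deprecated extensions/v1beta1 API, which the v1.13 apiserver in this run still accepts but later releases remove. As a hedged sketch only (not part of the test's manifests), the same frontend Deployment expressed against apps/v1 additionally needs an explicit selector matching the template labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
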
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:46:05.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 19 12:46:06.059: INFO: Waiting up to 5m0s for pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-var-expansion-68jtk" to be "success or failure"
Feb 19 12:46:06.063: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34723ms
Feb 19 12:46:08.091: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032395286s
Feb 19 12:46:10.105: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046749377s
Feb 19 12:46:12.910: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851238539s
Feb 19 12:46:15.007: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.948474219s
Feb 19 12:46:17.023: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.963917208s
STEP: Saw pod success
Feb 19 12:46:17.023: INFO: Pod "var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:46:17.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 12:46:17.137: INFO: Waiting for pod var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:46:17.141: INFO: Pod var-expansion-cb6ae4fe-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:46:17.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-68jtk" for this suite.
Feb 19 12:46:23.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:46:23.500: INFO: namespace: e2e-tests-var-expansion-68jtk, resource: bindings, ignored listing per whitelist
Feb 19 12:46:23.551: INFO: namespace e2e-tests-var-expansion-68jtk deletion completed in 6.402003468s

• [SLOW TEST:17.781 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
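
Note: the substitution exercised here is the $(VAR) expansion Kubernetes applies to a container's command and args using that container's env entries, before anything is handed to the runtime. A minimal illustrative pod (names and message are assumptions, not the test's generated spec):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the container env"
    # $(MESSAGE) is expanded by Kubernetes, so no shell is needed here
    command: ["/bin/echo", "$(MESSAGE)"]
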
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:46:23.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 12:46:23.881: INFO: Creating ReplicaSet my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008
Feb 19 12:46:23.998: INFO: Pod name my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008: Found 0 pods out of 1
Feb 19 12:46:29.774: INFO: Pod name my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008: Found 1 pods out of 1
Feb 19 12:46:29.774: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008" is running
Feb 19 12:46:34.292: INFO: Pod "my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008-lr8vd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 12:46:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 12:46:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 12:46:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-19 12:46:24 +0000 UTC Reason: Message:}])
Feb 19 12:46:34.292: INFO: Trying to dial the pod
Feb 19 12:46:39.341: INFO: Controller my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008: Got expected result from replica 1 [my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008-lr8vd]: "my-hostname-basic-d60cf40b-5315-11ea-a0a3-0242ac110008-lr8vd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:46:39.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-82462" for this suite.
Feb 19 12:46:45.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:46:46.681: INFO: namespace: e2e-tests-replicaset-82462, resource: bindings, ignored listing per whitelist
Feb 19 12:46:46.784: INFO: namespace e2e-tests-replicaset-82462 deletion completed in 7.431439295s

• [SLOW TEST:23.232 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
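
Note: the ReplicaSet here is created programmatically by the framework; the shape being validated is roughly the sketch below. The serve-hostname image, tag and port are assumptions inferred from the hostname-echo behaviour in the log, not taken from the test source:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
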
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:46:46.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0219 12:47:20.953512       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 19 12:47:20.953: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:47:20.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m28dj" for this suite.
Feb 19 12:47:31.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:47:31.226: INFO: namespace: e2e-tests-gc-m28dj, resource: bindings, ignored listing per whitelist
Feb 19 12:47:31.226: INFO: namespace e2e-tests-gc-m28dj deletion completed in 10.267686657s

• [SLOW TEST:44.441 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
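
Note: "orphan" here means the Deployment is deleted with deleteOptions.propagationPolicy=Orphan, so the garbage collector must leave the ReplicaSet it created in place. A rough kubectl-era equivalent, sketched with hypothetical names:

# Create a Deployment, then delete only the Deployment object, orphaning its ReplicaSet.
# With kubectl of this vintage the flag is roughly: kubectl delete deployment gc-orphan-demo --cascade=false
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-orphan-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-orphan-demo
  template:
    metadata:
      labels:
        app: gc-orphan-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
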
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:47:31.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 12:47:31.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-vx2lv" to be "success or failure"
Feb 19 12:47:32.559: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 787.542752ms
Feb 19 12:47:34.597: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82618555s
Feb 19 12:47:36.608: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.837101468s
Feb 19 12:47:40.011: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240298545s
Feb 19 12:47:42.382: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61088573s
Feb 19 12:47:44.415: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.644196608s
Feb 19 12:47:46.432: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.660922332s
STEP: Saw pod success
Feb 19 12:47:46.432: INFO: Pod "downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:47:46.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 12:47:46.643: INFO: Waiting for pod downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:47:46.657: INFO: Pod downwardapi-volume-fe81285a-5315-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:47:46.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vx2lv" for this suite.
Feb 19 12:47:52.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:47:53.107: INFO: namespace: e2e-tests-downward-api-vx2lv, resource: bindings, ignored listing per whitelist
Feb 19 12:47:53.193: INFO: namespace e2e-tests-downward-api-vx2lv deletion completed in 6.528997625s

• [SLOW TEST:21.967 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
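
Note: the downward API volume plugin tested here projects the container's own resource requests into a file via resourceFieldRef. A minimal sketch of that wiring; the file name, request value and divisor are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
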
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:47:53.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:47:59.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-pwc7l" for this suite.
Feb 19 12:48:06.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:48:06.177: INFO: namespace: e2e-tests-namespaces-pwc7l, resource: bindings, ignored listing per whitelist
Feb 19 12:48:06.296: INFO: namespace e2e-tests-namespaces-pwc7l deletion completed in 6.345216452s
STEP: Destroying namespace "e2e-tests-nsdeletetest-48mk5" for this suite.
Feb 19 12:48:06.335: INFO: Namespace e2e-tests-nsdeletetest-48mk5 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-6278v" for this suite.
Feb 19 12:48:12.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:48:12.703: INFO: namespace: e2e-tests-nsdeletetest-6278v, resource: bindings, ignored listing per whitelist
Feb 19 12:48:12.710: INFO: namespace e2e-tests-nsdeletetest-6278v deletion completed in 6.374175518s

• [SLOW TEST:19.516 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
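
Note: the behaviour asserted above is that namespaced objects, Services included, are removed with their namespace, and a recreated namespace of the same name starts empty. Illustrative sketch only; names are placeholders:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-demo
spec:
  selector:
    app: test-service
  ports:
  - port: 80
    targetPort: 80

Deleting nsdeletetest-demo removes test-service; after the namespace is recreated, listing its services returns nothing.
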
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:48:12.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-bbdz
STEP: Creating a pod to test atomic-volume-subpath
Feb 19 12:48:13.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bbdz" in namespace "e2e-tests-subpath-fhntv" to be "success or failure"
Feb 19 12:48:13.063: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.940314ms
Feb 19 12:48:15.091: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036663617s
Feb 19 12:48:17.105: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050213335s
Feb 19 12:48:19.120: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065433055s
Feb 19 12:48:21.236: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181487646s
Feb 19 12:48:23.253: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198425028s
Feb 19 12:48:25.262: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.207911675s
Feb 19 12:48:27.667: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.612872302s
Feb 19 12:48:29.688: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.633222842s
Feb 19 12:48:31.706: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 18.652014922s
Feb 19 12:48:33.745: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 20.690780274s
Feb 19 12:48:35.757: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 22.70250616s
Feb 19 12:48:37.773: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 24.718899605s
Feb 19 12:48:39.787: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 26.732193448s
Feb 19 12:48:41.805: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 28.750235632s
Feb 19 12:48:43.886: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 30.831448334s
Feb 19 12:48:45.910: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 32.855230914s
Feb 19 12:48:47.924: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Running", Reason="", readiness=false. Elapsed: 34.869279871s
Feb 19 12:48:49.947: INFO: Pod "pod-subpath-test-configmap-bbdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.892921517s
STEP: Saw pod success
Feb 19 12:48:49.947: INFO: Pod "pod-subpath-test-configmap-bbdz" satisfied condition "success or failure"
Feb 19 12:48:49.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-bbdz container test-container-subpath-configmap-bbdz: 
STEP: delete the pod
Feb 19 12:48:50.762: INFO: Waiting for pod pod-subpath-test-configmap-bbdz to disappear
Feb 19 12:48:51.264: INFO: Pod pod-subpath-test-configmap-bbdz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bbdz
Feb 19 12:48:51.264: INFO: Deleting pod "pod-subpath-test-configmap-bbdz" in namespace "e2e-tests-subpath-fhntv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:48:51.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fhntv" for this suite.
Feb 19 12:48:57.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:48:57.562: INFO: namespace: e2e-tests-subpath-fhntv, resource: bindings, ignored listing per whitelist
Feb 19 12:48:57.617: INFO: namespace e2e-tests-subpath-fhntv deletion completed in 6.313349471s

• [SLOW TEST:44.907 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
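
Note: the case covered here is mounting a single ConfigMap key over a path that already exists in the image (for example /etc/hosts) using subPath, with the atomic-writer machinery still delivering the key's content. A hedged sketch, not the test's generated spec:

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  hosts: "127.0.0.1 subpath-demo"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/hosts"]
    volumeMounts:
    - name: config
      mountPath: /etc/hosts      # overlays a file that already exists in the image
      subPath: hosts
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
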
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:48:57.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 19 12:48:57.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:48:58.200: INFO: stderr: ""
Feb 19 12:48:58.200: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 19 12:48:58.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:48:58.453: INFO: stderr: ""
Feb 19 12:48:58.454: INFO: stdout: "update-demo-nautilus-7dsjg update-demo-nautilus-h5h94 "
Feb 19 12:48:58.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dsjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:48:58.632: INFO: stderr: ""
Feb 19 12:48:58.632: INFO: stdout: ""
Feb 19 12:48:58.633: INFO: update-demo-nautilus-7dsjg is created but not running
Feb 19 12:49:03.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:03.791: INFO: stderr: ""
Feb 19 12:49:03.791: INFO: stdout: "update-demo-nautilus-7dsjg update-demo-nautilus-h5h94 "
Feb 19 12:49:03.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dsjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:03.980: INFO: stderr: ""
Feb 19 12:49:03.980: INFO: stdout: ""
Feb 19 12:49:03.981: INFO: update-demo-nautilus-7dsjg is created but not running
Feb 19 12:49:08.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:09.131: INFO: stderr: ""
Feb 19 12:49:09.131: INFO: stdout: "update-demo-nautilus-7dsjg update-demo-nautilus-h5h94 "
Feb 19 12:49:09.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dsjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:09.240: INFO: stderr: ""
Feb 19 12:49:09.240: INFO: stdout: ""
Feb 19 12:49:09.240: INFO: update-demo-nautilus-7dsjg is created but not running
Feb 19 12:49:14.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:14.385: INFO: stderr: ""
Feb 19 12:49:14.386: INFO: stdout: "update-demo-nautilus-7dsjg update-demo-nautilus-h5h94 "
Feb 19 12:49:14.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dsjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:14.495: INFO: stderr: ""
Feb 19 12:49:14.495: INFO: stdout: "true"
Feb 19 12:49:14.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7dsjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:14.606: INFO: stderr: ""
Feb 19 12:49:14.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:49:14.606: INFO: validating pod update-demo-nautilus-7dsjg
Feb 19 12:49:14.644: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:49:14.644: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:49:14.644: INFO: update-demo-nautilus-7dsjg is verified up and running
Feb 19 12:49:14.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5h94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:14.721: INFO: stderr: ""
Feb 19 12:49:14.721: INFO: stdout: "true"
Feb 19 12:49:14.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5h94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:14.900: INFO: stderr: ""
Feb 19 12:49:14.901: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 19 12:49:14.901: INFO: validating pod update-demo-nautilus-h5h94
Feb 19 12:49:14.911: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 19 12:49:14.911: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 19 12:49:14.911: INFO: update-demo-nautilus-h5h94 is verified up and running
STEP: using delete to clean up resources
Feb 19 12:49:14.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:15.030: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 19 12:49:15.030: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 19 12:49:15.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qs6fz'
Feb 19 12:49:15.270: INFO: stderr: "No resources found.\n"
Feb 19 12:49:15.271: INFO: stdout: ""
Feb 19 12:49:15.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qs6fz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 19 12:49:15.411: INFO: stderr: ""
Feb 19 12:49:15.411: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:49:15.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qs6fz" for this suite.
Feb 19 12:49:38.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:49:39.033: INFO: namespace: e2e-tests-kubectl-qs6fz, resource: bindings, ignored listing per whitelist
Feb 19 12:49:39.115: INFO: namespace e2e-tests-kubectl-qs6fz deletion completed in 23.665719492s

• [SLOW TEST:41.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
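
Note: the replication controller driven by kubectl above is roughly the following shape; the image, container name and name=update-demo label come from the log, everything else is an assumption:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80

The readiness polling seen above is plain kubectl: a go-template over .status.containerStatuses that prints "true" once the update-demo container reports a running state.
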
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:49:39.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 19 12:49:39.322: INFO: PodSpec: initContainers in spec.initContainers
Feb 19 12:50:55.027: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4a8ae549-5316-11ea-a0a3-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-87mjg", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-87mjg/pods/pod-init-4a8ae549-5316-11ea-a0a3-0242ac110008", UID:"4a8bcab3-5316-11ea-a994-fa163e34d433", ResourceVersion:"22202248", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717713379, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"322424159"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7tmnx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c5c800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tmnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tmnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7tmnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f7c268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024bbb00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f7c2e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f7c300)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f7c308), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f7c30c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717713379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717713379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717713379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717713379, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0018c79c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ca1420)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ca1490)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b39d24e6f1e4c120d24c4610082ebb429b88a71cd9a25429b35e247799d165cf"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c7a00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0018c79e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:50:55.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-87mjg" for this suite.
Feb 19 12:51:19.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:51:19.348: INFO: namespace: e2e-tests-init-container-87mjg, resource: bindings, ignored listing per whitelist
Feb 19 12:51:19.357: INFO: namespace e2e-tests-init-container-87mjg deletion completed in 24.313225191s

• [SLOW TEST:100.241 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
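
Note: the dumped PodSpec boils down to the pattern below. With restartPolicy: Always, a failing first init container is retried with back-off indefinitely, so init2 never runs and the app container run1 never starts. Sketch reconstructed from the dump above; field values outside the dump are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # fails on every attempt, blocking everything after it
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
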
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:51:19.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-6hsz
STEP: Creating a pod to test atomic-volume-subpath
Feb 19 12:51:19.584: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6hsz" in namespace "e2e-tests-subpath-hzl8z" to be "success or failure"
Feb 19 12:51:19.609: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 25.864016ms
Feb 19 12:51:21.638: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054366794s
Feb 19 12:51:23.655: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071194868s
Feb 19 12:51:25.689: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105570272s
Feb 19 12:51:28.432: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848221912s
Feb 19 12:51:30.452: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868415026s
Feb 19 12:51:32.470: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.88684897s
Feb 19 12:51:34.488: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.904073227s
Feb 19 12:51:36.511: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.92754375s
Feb 19 12:51:38.540: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.95674952s
Feb 19 12:51:40.578: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 20.994063049s
Feb 19 12:51:42.616: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 23.032071729s
Feb 19 12:51:44.632: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 25.048314422s
Feb 19 12:51:46.647: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 27.063165679s
Feb 19 12:51:48.666: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 29.082016065s
Feb 19 12:51:50.696: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 31.111995465s
Feb 19 12:51:52.732: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 33.147918303s
Feb 19 12:51:54.811: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 35.227798847s
Feb 19 12:51:56.833: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Running", Reason="", readiness=false. Elapsed: 37.249407449s
Feb 19 12:51:59.337: INFO: Pod "pod-subpath-test-projected-6hsz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.753835963s
STEP: Saw pod success
Feb 19 12:51:59.338: INFO: Pod "pod-subpath-test-projected-6hsz" satisfied condition "success or failure"
Feb 19 12:51:59.353: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-6hsz container test-container-subpath-projected-6hsz: 
STEP: delete the pod
Feb 19 12:52:00.116: INFO: Waiting for pod pod-subpath-test-projected-6hsz to disappear
Feb 19 12:52:00.122: INFO: Pod pod-subpath-test-projected-6hsz no longer exists
STEP: Deleting pod pod-subpath-test-projected-6hsz
Feb 19 12:52:00.122: INFO: Deleting pod "pod-subpath-test-projected-6hsz" in namespace "e2e-tests-subpath-hzl8z"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:52:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hzl8z" for this suite.
Feb 19 12:52:06.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:52:06.316: INFO: namespace: e2e-tests-subpath-hzl8z, resource: bindings, ignored listing per whitelist
Feb 19 12:52:06.435: INFO: namespace e2e-tests-subpath-hzl8z deletion completed in 6.301425073s

• [SLOW TEST:47.078 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
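
Note: same subPath mechanics as the earlier ConfigMap case, but the source is a projected volume. A minimal sketch assuming a Secret named subpath-demo-secret with key secret-key already exists; all names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /subpath-target/projected-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /subpath-target/projected-file
      subPath: projected-file
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: subpath-demo-secret
          items:
          - key: secret-key
            path: projected-file
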
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:52:06.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a2631e0b-5316-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:52:06.731: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-sk8ck" to be "success or failure"
Feb 19 12:52:06.766: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.945657ms
Feb 19 12:52:08.803: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071639676s
Feb 19 12:52:10.823: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091233242s
Feb 19 12:52:12.835: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103496779s
Feb 19 12:52:14.995: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263250754s
Feb 19 12:52:17.039: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307713583s
STEP: Saw pod success
Feb 19 12:52:17.039: INFO: Pod "pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:52:17.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 19 12:52:17.235: INFO: Waiting for pod pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:52:17.241: INFO: Pod pod-projected-secrets-a264d638-5316-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:52:17.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sk8ck" for this suite.
Feb 19 12:52:23.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:52:23.334: INFO: namespace: e2e-tests-projected-sk8ck, resource: bindings, ignored listing per whitelist
Feb 19 12:52:23.427: INFO: namespace e2e-tests-projected-sk8ck deletion completed in 6.178524475s

• [SLOW TEST:16.992 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
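The projected-secret case creates a Secret and a pod that mounts it through a projected volume, then reads the key back from the mounted file (the framework waits for the pod to reach "success or failure" as logged above). A rough Go sketch of the two objects involved; the names, key, and busybox image are assumptions standing in for the framework's own fixtures.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Secret to be projected (hypothetical name and key).
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	// Pod that mounts the secret through a projected volume and prints the key.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
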
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:52:23.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 19 12:52:23.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-7jphj run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 19 12:52:37.632: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0219 12:52:36.211312    4515 log.go:172] (0xc000138790) (0xc0007c4b40) Create stream\nI0219 12:52:36.211523    4515 log.go:172] (0xc000138790) (0xc0007c4b40) Stream added, broadcasting: 1\nI0219 12:52:36.220548    4515 log.go:172] (0xc000138790) Reply frame received for 1\nI0219 12:52:36.220607    4515 log.go:172] (0xc000138790) (0xc00033c000) Create stream\nI0219 12:52:36.220623    4515 log.go:172] (0xc000138790) (0xc00033c000) Stream added, broadcasting: 3\nI0219 12:52:36.221989    4515 log.go:172] (0xc000138790) Reply frame received for 3\nI0219 12:52:36.222038    4515 log.go:172] (0xc000138790) (0xc00033c0a0) Create stream\nI0219 12:52:36.222057    4515 log.go:172] (0xc000138790) (0xc00033c0a0) Stream added, broadcasting: 5\nI0219 12:52:36.223728    4515 log.go:172] (0xc000138790) Reply frame received for 5\nI0219 12:52:36.223758    4515 log.go:172] (0xc000138790) (0xc0007c4be0) Create stream\nI0219 12:52:36.223764    4515 log.go:172] (0xc000138790) (0xc0007c4be0) Stream added, broadcasting: 7\nI0219 12:52:36.225444    4515 log.go:172] (0xc000138790) Reply frame received for 7\nI0219 12:52:36.225797    4515 log.go:172] (0xc00033c000) (3) Writing data frame\nI0219 12:52:36.225980    4515 log.go:172] (0xc00033c000) (3) Writing data frame\nI0219 12:52:36.237135    4515 log.go:172] (0xc000138790) Data frame received for 5\nI0219 12:52:36.237155    4515 log.go:172] (0xc00033c0a0) (5) Data frame handling\nI0219 12:52:36.237183    4515 log.go:172] (0xc00033c0a0) (5) Data frame sent\nI0219 12:52:36.240810    4515 log.go:172] (0xc000138790) Data frame received for 5\nI0219 12:52:36.240878    4515 log.go:172] (0xc00033c0a0) (5) Data frame handling\nI0219 12:52:36.240916    4515 log.go:172] (0xc00033c0a0) (5) Data frame sent\nI0219 12:52:37.559742    4515 log.go:172] (0xc000138790) (0xc00033c000) Stream removed, broadcasting: 3\nI0219 12:52:37.559839    4515 log.go:172] (0xc000138790) Data frame received for 1\nI0219 12:52:37.559863    4515 log.go:172] (0xc0007c4b40) (1) Data frame handling\nI0219 12:52:37.559872    4515 log.go:172] (0xc0007c4b40) (1) Data frame sent\nI0219 12:52:37.559891    4515 log.go:172] (0xc000138790) (0xc0007c4b40) Stream removed, broadcasting: 1\nI0219 12:52:37.560196    4515 log.go:172] (0xc000138790) (0xc00033c0a0) Stream removed, broadcasting: 5\nI0219 12:52:37.560300    4515 log.go:172] (0xc000138790) (0xc0007c4be0) Stream removed, broadcasting: 7\nI0219 12:52:37.560480    4515 log.go:172] (0xc000138790) Go away received\nI0219 12:52:37.560681    4515 log.go:172] (0xc000138790) (0xc0007c4b40) Stream removed, broadcasting: 1\nI0219 12:52:37.560719    4515 log.go:172] (0xc000138790) (0xc00033c000) Stream removed, broadcasting: 3\nI0219 12:52:37.560732    4515 log.go:172] (0xc000138790) (0xc00033c0a0) Stream removed, broadcasting: 5\nI0219 12:52:37.560749    4515 log.go:172] (0xc000138790) (0xc0007c4be0) Stream removed, broadcasting: 7\n"
Feb 19 12:52:37.632: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:52:39.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7jphj" for this suite.
Feb 19 12:52:46.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:52:47.036: INFO: namespace: e2e-tests-kubectl-7jphj, resource: bindings, ignored listing per whitelist
Feb 19 12:52:47.063: INFO: namespace e2e-tests-kubectl-7jphj deletion completed in 7.403741101s

• [SLOW TEST:23.635 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
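The kubectl invocation captured above goes through the deprecated --generator=job/v1 path (the deprecation warning is visible in the logged stderr). The Job object it produces is roughly the one sketched below; the image and command are taken from the logged command line, while the remaining field values are assumptions rather than the generator's exact output.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure, // from --restart=OnFailure
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true, // --stdin: the test attaches and writes "abcd1234"
						StdinOnce: true,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}

The --rm flag then deletes the Job once the attached command finishes, which is why the stdout above ends with the job.batch deletion message and the test only has to verify the Job is gone.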
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:52:47.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-g7vf4 in namespace e2e-tests-proxy-dlth6
I0219 12:52:47.402245       8 runners.go:184] Created replication controller with name: proxy-service-g7vf4, namespace: e2e-tests-proxy-dlth6, replica count: 1
I0219 12:52:48.453309       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:49.453853       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:50.454519       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:51.455019       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:52.455513       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:53.456259       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:54.457160       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:55.457972       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:56.458886       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:52:57.459595       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:52:58.460528       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:52:59.462114       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:00.463277       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:01.464465       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:02.465195       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:03.465821       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:04.466908       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0219 12:53:05.467862       8 runners.go:184] proxy-service-g7vf4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 19 12:53:05.477: INFO: setup took 18.235718624s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 19 12:53:05.567: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dlth6/pods/proxy-service-g7vf4-zn9lv/proxy/: ...
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d4fbe639-5316-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:53:31.618: INFO: Waiting up to 5m0s for pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-vjh4j" to be "success or failure"
Feb 19 12:53:31.651: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.160555ms
Feb 19 12:53:34.767: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.148825332s
Feb 19 12:53:36.776: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.158681085s
Feb 19 12:53:38.794: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.176211493s
Feb 19 12:53:40.808: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.190196716s
Feb 19 12:53:42.834: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.216705068s
STEP: Saw pod success
Feb 19 12:53:42.835: INFO: Pod "pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:53:42.844: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 19 12:53:43.205: INFO: Waiting for pod pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:53:43.228: INFO: Pod pod-secrets-d4fdd3e7-5316-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:53:43.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vjh4j" for this suite.
Feb 19 12:53:49.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:53:49.345: INFO: namespace: e2e-tests-secrets-vjh4j, resource: bindings, ignored listing per whitelist
Feb 19 12:53:49.450: INFO: namespace e2e-tests-secrets-vjh4j deletion completed in 6.210297626s

• [SLOW TEST:18.313 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
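The defaultMode variant differs from a plain secret mount only in secretVolumeSource.defaultMode, which sets the permission bits on every file the kubelet writes into the volume. A minimal Go sketch; the names, the busybox image, and the 0400 mode are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // files in the volume are created read-only for the owner
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
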
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:53:49.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 19 12:53:49.687: INFO: Waiting up to 5m0s for pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008" in namespace "e2e-tests-containers-2qk7n" to be "success or failure"
Feb 19 12:53:49.707: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.915541ms
Feb 19 12:53:52.156: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468984966s
Feb 19 12:53:54.178: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491467095s
Feb 19 12:53:56.445: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758553184s
Feb 19 12:53:58.526: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838958675s
Feb 19 12:54:00.561: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.874201179s
STEP: Saw pod success
Feb 19 12:54:00.561: INFO: Pod "client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:54:00.578: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 12:54:00.726: INFO: Waiting for pod client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:54:00.742: INFO: Pod client-containers-dfc27e07-5316-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:54:00.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-2qk7n" for this suite.
Feb 19 12:54:06.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:54:06.986: INFO: namespace: e2e-tests-containers-2qk7n, resource: bindings, ignored listing per whitelist
Feb 19 12:54:07.110: INFO: namespace e2e-tests-containers-2qk7n deletion completed in 6.35360797s

• [SLOW TEST:17.659 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
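In a container spec, args corresponds to the Docker image's CMD and command to its ENTRYPOINT, so supplying only args overrides the image's default arguments while leaving its entrypoint alone, which is what this test exercises. A minimal sketch using busybox (which has no ENTRYPOINT, so the supplied args become the executed command); the conformance test itself uses its own argument-echoing test image, so treat the names and image here as assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-args-override"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Only Args is set: the image's default CMD is replaced,
				// and its ENTRYPOINT (none for busybox) is untouched.
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
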
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:54:07.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ea4ad587-5316-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 12:54:07.587: INFO: Waiting up to 5m0s for pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-xrp69" to be "success or failure"
Feb 19 12:54:07.608: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.883337ms
Feb 19 12:54:10.043: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45648145s
Feb 19 12:54:12.132: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545296178s
Feb 19 12:54:14.170: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583490005s
Feb 19 12:54:16.560: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.972816214s
Feb 19 12:54:18.848: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.26109261s
STEP: Saw pod success
Feb 19 12:54:18.848: INFO: Pod "pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 12:54:18.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 19 12:54:19.259: INFO: Waiting for pod pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008 to disappear
Feb 19 12:54:19.282: INFO: Pod pod-secrets-ea6f5a07-5316-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:54:19.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xrp69" for this suite.
Feb 19 12:54:25.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:54:25.636: INFO: namespace: e2e-tests-secrets-xrp69, resource: bindings, ignored listing per whitelist
Feb 19 12:54:25.662: INFO: namespace e2e-tests-secrets-xrp69 deletion completed in 6.343565637s
STEP: Destroying namespace "e2e-tests-secret-namespace-tzlhh" for this suite.
Feb 19 12:54:31.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:54:32.459: INFO: namespace: e2e-tests-secret-namespace-tzlhh, resource: bindings, ignored listing per whitelist
Feb 19 12:54:32.599: INFO: namespace e2e-tests-secret-namespace-tzlhh deletion completed in 6.937438214s

• [SLOW TEST:25.489 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
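This case creates a secret with the same name in a second namespace (both namespaces are destroyed above) and asserts that the pod still mounts the secret from its own namespace, since a secret volume source names only the secret, never a namespace. A Go sketch of the three objects; the namespace names, key, and image are hypothetical.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same secret name in two namespaces; only the one in the pod's
	// namespace should be mounted.
	secretInPodNamespace := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "e2e-tests-secrets"},
		Data:       map[string][]byte{"data-1": []byte("value-from-pod-namespace")},
	}
	secretElsewhere := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "e2e-tests-secret-namespace"},
		Data:       map[string][]byte{"data-1": []byte("value-from-other-namespace")},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: "e2e-tests-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// There is no namespace field here: the name resolves
					// within the pod's own namespace.
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	for _, obj := range []interface{}{secretInPodNamespace, secretElsewhere, pod} {
		out, _ := json.Marshal(obj)
		fmt.Println(string(out))
	}
}
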
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:54:32.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 19 12:54:33.166: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 19 12:54:33.183: INFO: Waiting for terminating namespaces to be deleted...
Feb 19 12:54:33.190: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 19 12:54:33.224: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 19 12:54:33.224: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 19 12:54:33.224: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 19 12:54:33.224: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 19 12:54:33.224: INFO: 	Container weave ready: true, restart count 0
Feb 19 12:54:33.224: INFO: 	Container weave-npc ready: true, restart count 0
Feb 19 12:54:33.224: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 19 12:54:33.224: INFO: 	Container coredns ready: true, restart count 0
Feb 19 12:54:33.224: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 19 12:54:33.224: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 19 12:54:33.224: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 19 12:54:33.224: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 19 12:54:33.224: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-03cc3662-5317-11ea-a0a3-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-03cc3662-5317-11ea-a0a3-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-03cc3662-5317-11ea-a0a3-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:55:09.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qmx67" for this suite.
Feb 19 12:55:46.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:55:46.353: INFO: namespace: e2e-tests-sched-pred-qmx67, resource: bindings, ignored listing per whitelist
Feb 19 12:55:46.362: INFO: namespace e2e-tests-sched-pred-qmx67 deletion completed in 36.367895652s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:73.760 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
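The predicate above is exercised by applying a random label to the chosen node (kubernetes.io/e2e-03cc3662-5317-11ea-a0a3-0242ac110008=42 in the log) and relaunching the pod with a matching nodeSelector. A sketch of such a relaunched pod; the label key and value come from the log, while the pod name, image, and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-node-selector"},
		Spec: corev1.PodSpec{
			// Must match the label applied to hunter-server-hu5at5svl7ps above,
			// otherwise the scheduler leaves the pod Pending.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-03cc3662-5317-11ea-a0a3-0242ac110008": "42",
			},
			Containers: []corev1.Container{{
				Name:    "with-labels",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
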
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:55:46.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 19 12:55:47.259: INFO: Number of nodes with available pods: 0
Feb 19 12:55:47.259: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:49.076: INFO: Number of nodes with available pods: 0
Feb 19 12:55:49.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:49.880: INFO: Number of nodes with available pods: 0
Feb 19 12:55:49.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:50.524: INFO: Number of nodes with available pods: 0
Feb 19 12:55:50.524: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:51.352: INFO: Number of nodes with available pods: 0
Feb 19 12:55:51.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:52.399: INFO: Number of nodes with available pods: 0
Feb 19 12:55:52.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:53.287: INFO: Number of nodes with available pods: 0
Feb 19 12:55:53.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:55.266: INFO: Number of nodes with available pods: 0
Feb 19 12:55:55.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:56.930: INFO: Number of nodes with available pods: 0
Feb 19 12:55:56.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:57.277: INFO: Number of nodes with available pods: 0
Feb 19 12:55:57.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 19 12:55:58.281: INFO: Number of nodes with available pods: 1
Feb 19 12:55:58.281: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 19 12:55:58.327: INFO: Number of nodes with available pods: 1
Feb 19 12:55:58.328: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sj78f, will wait for the garbage collector to delete the pods
Feb 19 12:55:59.930: INFO: Deleting DaemonSet.extensions daemon-set took: 41.377281ms
Feb 19 12:56:00.431: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.575656ms
Feb 19 12:56:08.767: INFO: Number of nodes with available pods: 0
Feb 19 12:56:08.767: INFO: Number of running nodes: 0, number of available pods: 0
Feb 19 12:56:08.774: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sj78f/daemonsets","resourceVersion":"22202926"},"items":null}

Feb 19 12:56:08.781: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sj78f/pods","resourceVersion":"22202926"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:56:08.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-sj78f" for this suite.
Feb 19 12:56:16.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:56:16.961: INFO: namespace: e2e-tests-daemonsets-sj78f, resource: bindings, ignored listing per whitelist
Feb 19 12:56:16.996: INFO: namespace e2e-tests-daemonsets-sj78f deletion completed in 8.190172939s

• [SLOW TEST:30.633 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
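The DaemonSet case creates a simple DaemonSet, waits for a pod on every node, forces that pod's phase to Failed, and expects the controller to recreate it. A sketch of a comparable DaemonSet in Go; the selector label and nginx image are illustrative, not the framework's exact fixture.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The selector must match the template labels, so every node
			// runs exactly one pod owned by this DaemonSet.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
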
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:56:16.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:56:17.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-n2s9s" for this suite.
Feb 19 12:56:25.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:56:26.141: INFO: namespace: e2e-tests-kubelet-test-n2s9s, resource: bindings, ignored listing per whitelist
Feb 19 12:56:26.163: INFO: namespace e2e-tests-kubelet-test-n2s9s deletion completed in 8.390373241s

• [SLOW TEST:9.166 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
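Here the fixture pod runs a command that always exits non-zero, so it sits in a crash loop; the only assertion is that deleting such a pod still works. A sketch of the failing pod plus the delete options a client might use; the names and the zero grace period are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // exits 1 every time, so the kubelet keeps restarting it
			}},
		},
	}
	grace := int64(0)
	del := metav1.DeleteOptions{GracePeriodSeconds: &grace} // immediate deletion of the crash-looping pod
	for _, obj := range []interface{}{pod, del} {
		out, _ := json.Marshal(obj)
		fmt.Println(string(out))
	}
}
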
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:56:26.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 19 12:56:37.240: INFO: Successfully updated pod "pod-update-3d3241b7-5317-11ea-a0a3-0242ac110008"
STEP: verifying the updated pod is in kubernetes
Feb 19 12:56:37.341: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:56:37.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gt4g5" for this suite.
Feb 19 12:57:03.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:57:03.542: INFO: namespace: e2e-tests-pods-gt4g5, resource: bindings, ignored listing per whitelist
Feb 19 12:57:04.815: INFO: namespace e2e-tests-pods-gt4g5 deletion completed in 27.458088278s

• [SLOW TEST:38.652 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
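The update case submits a pod and then changes it in place (the log reports "Successfully updated pod"); the conformance test does a read-modify-write of the pod's labels against the stored object. The same effect can be achieved with a strategic-merge patch body like the one built below; the label key and value are assumptions.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Strategic-merge patch body that adds or updates one label; it would be
	// sent as PATCH /api/v1/namespaces/<ns>/pods/<name> with content type
	// application/strategic-merge-patch+json.
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"labels": map[string]string{"time": "updated"},
		},
	}
	body, _ := json.Marshal(patch)
	fmt.Println(string(body))
}
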
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:57:04.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0219 12:57:15.809827       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 19 12:57:15.810: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:57:15.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kffp5" for this suite.
Feb 19 12:57:23.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:57:23.979: INFO: namespace: e2e-tests-gc-kffp5, resource: bindings, ignored listing per whitelist
Feb 19 12:57:24.258: INFO: namespace e2e-tests-gc-kffp5 deletion completed in 8.442383972s

• [SLOW TEST:19.442 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
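Deleting the replication controller "without orphaning" means the delete request uses a propagation policy that lets the garbage collector remove the RC's pods by following their ownerReferences. A Go sketch of the two relevant pieces; the RC name and UID are placeholders, and the choice of background propagation is an assumption for illustration.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pods created by the RC carry an ownerReference back to it; this is
	// what the garbage collector follows once the RC is gone.
	ownerRef := metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest.rc",        // placeholder RC name
		UID:        "replace-with-rc-uid",  // placeholder
	}
	// Deleting the RC with background propagation (i.e. not orphaning)
	// asks the garbage collector to delete the dependent pods too.
	policy := metav1.DeletePropagationBackground
	del := metav1.DeleteOptions{PropagationPolicy: &policy}

	for _, obj := range []interface{}{ownerRef, del} {
		out, _ := json.Marshal(obj)
		fmt.Println(string(out))
	}
}
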
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:57:24.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-rjwh6
I0219 12:57:24.460950       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-rjwh6, replica count: 1
I0219 12:57:25.512886       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:26.513953       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:27.515043       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:28.516036       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:29.516666       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:30.517574       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:31.518878       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:32.519699       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:33.520584       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:34.521639       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0219 12:57:35.522254       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 19 12:57:35.688: INFO: Created: latency-svc-txjkq
Feb 19 12:57:35.819: INFO: Got endpoints: latency-svc-txjkq [196.347241ms]
Feb 19 12:57:36.008: INFO: Created: latency-svc-48f25
Feb 19 12:57:36.051: INFO: Created: latency-svc-m9fsg
Feb 19 12:57:36.052: INFO: Got endpoints: latency-svc-48f25 [231.6996ms]
Feb 19 12:57:36.174: INFO: Got endpoints: latency-svc-m9fsg [350.42294ms]
Feb 19 12:57:36.192: INFO: Created: latency-svc-25987
Feb 19 12:57:36.216: INFO: Got endpoints: latency-svc-25987 [396.110394ms]
Feb 19 12:57:36.287: INFO: Created: latency-svc-zvm4b
Feb 19 12:57:36.497: INFO: Got endpoints: latency-svc-zvm4b [675.102157ms]
Feb 19 12:57:36.540: INFO: Created: latency-svc-mzssf
Feb 19 12:57:36.573: INFO: Got endpoints: latency-svc-mzssf [748.561947ms]
Feb 19 12:57:36.744: INFO: Created: latency-svc-77rrx
Feb 19 12:57:36.757: INFO: Got endpoints: latency-svc-77rrx [932.767094ms]
Feb 19 12:57:36.939: INFO: Created: latency-svc-8m6wg
Feb 19 12:57:36.965: INFO: Got endpoints: latency-svc-8m6wg [1.141410074s]
Feb 19 12:57:37.010: INFO: Created: latency-svc-vxlh2
Feb 19 12:57:37.032: INFO: Got endpoints: latency-svc-vxlh2 [1.208152312s]
Feb 19 12:57:37.151: INFO: Created: latency-svc-sp48m
Feb 19 12:57:37.214: INFO: Got endpoints: latency-svc-sp48m [1.389187297s]
Feb 19 12:57:37.344: INFO: Created: latency-svc-cp7wr
Feb 19 12:57:37.359: INFO: Got endpoints: latency-svc-cp7wr [1.535547267s]
Feb 19 12:57:37.432: INFO: Created: latency-svc-xrsnv
Feb 19 12:57:37.643: INFO: Got endpoints: latency-svc-xrsnv [1.819312635s]
Feb 19 12:57:37.685: INFO: Created: latency-svc-vjfnv
Feb 19 12:57:37.724: INFO: Got endpoints: latency-svc-vjfnv [1.901146913s]
Feb 19 12:57:37.924: INFO: Created: latency-svc-7jwbn
Feb 19 12:57:37.960: INFO: Got endpoints: latency-svc-7jwbn [2.135500694s]
Feb 19 12:57:37.968: INFO: Created: latency-svc-52clb
Feb 19 12:57:40.053: INFO: Got endpoints: latency-svc-52clb [4.228672272s]
Feb 19 12:57:40.315: INFO: Created: latency-svc-f7bk7
Feb 19 12:57:40.361: INFO: Got endpoints: latency-svc-f7bk7 [4.535852328s]
Feb 19 12:57:40.611: INFO: Created: latency-svc-82cjp
Feb 19 12:57:40.619: INFO: Got endpoints: latency-svc-82cjp [4.566808178s]
Feb 19 12:57:40.814: INFO: Created: latency-svc-sjwg8
Feb 19 12:57:40.835: INFO: Got endpoints: latency-svc-sjwg8 [4.660997809s]
Feb 19 12:57:41.041: INFO: Created: latency-svc-cmbtt
Feb 19 12:57:41.051: INFO: Got endpoints: latency-svc-cmbtt [4.835309593s]
Feb 19 12:57:41.234: INFO: Created: latency-svc-8gz22
Feb 19 12:57:41.245: INFO: Got endpoints: latency-svc-8gz22 [4.747452192s]
Feb 19 12:57:41.524: INFO: Created: latency-svc-2dj9m
Feb 19 12:57:41.536: INFO: Got endpoints: latency-svc-2dj9m [4.963054201s]
Feb 19 12:57:41.761: INFO: Created: latency-svc-lbjtc
Feb 19 12:57:41.814: INFO: Got endpoints: latency-svc-lbjtc [5.057513608s]
Feb 19 12:57:42.038: INFO: Created: latency-svc-b88db
Feb 19 12:57:42.049: INFO: Got endpoints: latency-svc-b88db [5.084270923s]
Feb 19 12:57:42.216: INFO: Created: latency-svc-jnbrg
Feb 19 12:57:42.274: INFO: Got endpoints: latency-svc-jnbrg [5.241580419s]
Feb 19 12:57:42.298: INFO: Created: latency-svc-srxmb
Feb 19 12:57:42.436: INFO: Got endpoints: latency-svc-srxmb [5.222222178s]
Feb 19 12:57:42.491: INFO: Created: latency-svc-lzt8n
Feb 19 12:57:42.662: INFO: Got endpoints: latency-svc-lzt8n [5.302817763s]
Feb 19 12:57:42.739: INFO: Created: latency-svc-fwbnh
Feb 19 12:57:42.897: INFO: Got endpoints: latency-svc-fwbnh [5.253112517s]
Feb 19 12:57:42.950: INFO: Created: latency-svc-rlt74
Feb 19 12:57:42.955: INFO: Got endpoints: latency-svc-rlt74 [5.230622802s]
Feb 19 12:57:43.137: INFO: Created: latency-svc-t7cd7
Feb 19 12:57:43.149: INFO: Got endpoints: latency-svc-t7cd7 [5.189020695s]
Feb 19 12:57:43.218: INFO: Created: latency-svc-r8dvw
Feb 19 12:57:43.313: INFO: Got endpoints: latency-svc-r8dvw [3.260692955s]
Feb 19 12:57:43.352: INFO: Created: latency-svc-w4rcn
Feb 19 12:57:43.358: INFO: Got endpoints: latency-svc-w4rcn [2.996752426s]
Feb 19 12:57:43.551: INFO: Created: latency-svc-xgfk2
Feb 19 12:57:43.557: INFO: Got endpoints: latency-svc-xgfk2 [2.938563807s]
Feb 19 12:57:43.894: INFO: Created: latency-svc-bsk88
Feb 19 12:57:43.933: INFO: Got endpoints: latency-svc-bsk88 [3.097525793s]
Feb 19 12:57:44.198: INFO: Created: latency-svc-p5fhq
Feb 19 12:57:44.246: INFO: Got endpoints: latency-svc-p5fhq [3.194552342s]
Feb 19 12:57:44.459: INFO: Created: latency-svc-gzf6t
Feb 19 12:57:44.483: INFO: Got endpoints: latency-svc-gzf6t [3.238036577s]
Feb 19 12:57:44.908: INFO: Created: latency-svc-mrv5z
Feb 19 12:57:44.917: INFO: Got endpoints: latency-svc-mrv5z [3.380138868s]
Feb 19 12:57:45.056: INFO: Created: latency-svc-455rq
Feb 19 12:57:45.056: INFO: Got endpoints: latency-svc-455rq [3.241777445s]
Feb 19 12:57:46.033: INFO: Created: latency-svc-lxv5z
Feb 19 12:57:46.069: INFO: Got endpoints: latency-svc-lxv5z [4.018921168s]
Feb 19 12:57:46.277: INFO: Created: latency-svc-t55kf
Feb 19 12:57:46.457: INFO: Got endpoints: latency-svc-t55kf [4.182279608s]
Feb 19 12:57:46.747: INFO: Created: latency-svc-rfchj
Feb 19 12:57:46.854: INFO: Got endpoints: latency-svc-rfchj [4.417166253s]
Feb 19 12:57:46.906: INFO: Created: latency-svc-svrlz
Feb 19 12:57:46.939: INFO: Got endpoints: latency-svc-svrlz [4.276242464s]
Feb 19 12:57:47.177: INFO: Created: latency-svc-vjplw
Feb 19 12:57:47.291: INFO: Got endpoints: latency-svc-vjplw [4.394097541s]
Feb 19 12:57:47.324: INFO: Created: latency-svc-brgjt
Feb 19 12:57:47.331: INFO: Got endpoints: latency-svc-brgjt [4.375621759s]
Feb 19 12:57:47.493: INFO: Created: latency-svc-q8mr9
Feb 19 12:57:47.507: INFO: Got endpoints: latency-svc-q8mr9 [4.357340064s]
Feb 19 12:57:48.406: INFO: Created: latency-svc-p2rg4
Feb 19 12:57:48.447: INFO: Got endpoints: latency-svc-p2rg4 [5.133879129s]
Feb 19 12:57:48.887: INFO: Created: latency-svc-27s64
Feb 19 12:57:48.926: INFO: Got endpoints: latency-svc-27s64 [5.568302557s]
Feb 19 12:57:49.238: INFO: Created: latency-svc-bz7lv
Feb 19 12:57:49.273: INFO: Got endpoints: latency-svc-bz7lv [5.715755333s]
Feb 19 12:57:49.682: INFO: Created: latency-svc-jtl29
Feb 19 12:57:49.706: INFO: Got endpoints: latency-svc-jtl29 [5.77334697s]
Feb 19 12:57:49.881: INFO: Created: latency-svc-tck42
Feb 19 12:57:50.064: INFO: Got endpoints: latency-svc-tck42 [5.818174048s]
Feb 19 12:57:50.112: INFO: Created: latency-svc-v96rc
Feb 19 12:57:50.137: INFO: Got endpoints: latency-svc-v96rc [5.653348547s]
Feb 19 12:57:51.567: INFO: Created: latency-svc-h2vlp
Feb 19 12:57:51.648: INFO: Got endpoints: latency-svc-h2vlp [6.73090098s]
Feb 19 12:57:51.861: INFO: Created: latency-svc-hqt5d
Feb 19 12:57:52.048: INFO: Got endpoints: latency-svc-hqt5d [6.991636298s]
Feb 19 12:57:52.137: INFO: Created: latency-svc-fbj9f
Feb 19 12:57:52.260: INFO: Got endpoints: latency-svc-fbj9f [6.190803706s]
Feb 19 12:57:52.571: INFO: Created: latency-svc-lxz5z
Feb 19 12:57:52.621: INFO: Got endpoints: latency-svc-lxz5z [6.163714673s]
Feb 19 12:57:53.005: INFO: Created: latency-svc-xb5vr
Feb 19 12:57:53.155: INFO: Got endpoints: latency-svc-xb5vr [6.301446789s]
Feb 19 12:57:53.373: INFO: Created: latency-svc-g8cw9
Feb 19 12:57:53.519: INFO: Got endpoints: latency-svc-g8cw9 [6.580335315s]
Feb 19 12:57:53.543: INFO: Created: latency-svc-4vnvm
Feb 19 12:57:53.559: INFO: Got endpoints: latency-svc-4vnvm [6.267284955s]
Feb 19 12:57:53.888: INFO: Created: latency-svc-5m5fw
Feb 19 12:57:53.899: INFO: Got endpoints: latency-svc-5m5fw [6.567971791s]
Feb 19 12:57:54.322: INFO: Created: latency-svc-xpjj7
Feb 19 12:57:54.322: INFO: Got endpoints: latency-svc-xpjj7 [6.815057392s]
Feb 19 12:57:54.605: INFO: Created: latency-svc-2qqc2
Feb 19 12:57:54.967: INFO: Got endpoints: latency-svc-2qqc2 [6.519702555s]
Feb 19 12:57:55.074: INFO: Created: latency-svc-rkcdw
Feb 19 12:57:55.224: INFO: Got endpoints: latency-svc-rkcdw [6.29733981s]
Feb 19 12:57:55.252: INFO: Created: latency-svc-gdnfr
Feb 19 12:57:55.283: INFO: Got endpoints: latency-svc-gdnfr [6.009653117s]
Feb 19 12:57:55.521: INFO: Created: latency-svc-868zc
Feb 19 12:57:55.555: INFO: Got endpoints: latency-svc-868zc [5.848633703s]
Feb 19 12:57:55.783: INFO: Created: latency-svc-p8sd5
Feb 19 12:57:55.800: INFO: Got endpoints: latency-svc-p8sd5 [5.735584641s]
Feb 19 12:57:56.149: INFO: Created: latency-svc-fkf2t
Feb 19 12:57:56.166: INFO: Got endpoints: latency-svc-fkf2t [6.029076253s]
Feb 19 12:57:56.363: INFO: Created: latency-svc-vr65v
Feb 19 12:57:56.630: INFO: Got endpoints: latency-svc-vr65v [4.981823483s]
Feb 19 12:57:56.656: INFO: Created: latency-svc-hgh24
Feb 19 12:57:56.665: INFO: Got endpoints: latency-svc-hgh24 [4.616869551s]
Feb 19 12:57:56.913: INFO: Created: latency-svc-s7dsc
Feb 19 12:57:56.929: INFO: Got endpoints: latency-svc-s7dsc [4.668849723s]
Feb 19 12:57:57.296: INFO: Created: latency-svc-qz95t
Feb 19 12:57:57.322: INFO: Got endpoints: latency-svc-qz95t [4.700729391s]
Feb 19 12:57:57.554: INFO: Created: latency-svc-j5b9b
Feb 19 12:57:57.578: INFO: Got endpoints: latency-svc-j5b9b [4.420834234s]
Feb 19 12:57:57.749: INFO: Created: latency-svc-6sw59
Feb 19 12:57:57.764: INFO: Got endpoints: latency-svc-6sw59 [4.244496886s]
Feb 19 12:57:57.838: INFO: Created: latency-svc-9jw74
Feb 19 12:57:58.036: INFO: Got endpoints: latency-svc-9jw74 [4.477184769s]
Feb 19 12:57:58.071: INFO: Created: latency-svc-txrhg
Feb 19 12:57:58.112: INFO: Got endpoints: latency-svc-txrhg [4.212280544s]
Feb 19 12:57:58.310: INFO: Created: latency-svc-t8bl5
Feb 19 12:57:58.319: INFO: Got endpoints: latency-svc-t8bl5 [3.996674429s]
Feb 19 12:57:58.545: INFO: Created: latency-svc-6827r
Feb 19 12:57:58.609: INFO: Got endpoints: latency-svc-6827r [3.64112463s]
Feb 19 12:57:58.849: INFO: Created: latency-svc-jhwbn
Feb 19 12:57:58.881: INFO: Got endpoints: latency-svc-jhwbn [3.65720252s]
Feb 19 12:57:59.166: INFO: Created: latency-svc-n89kc
Feb 19 12:57:59.175: INFO: Got endpoints: latency-svc-n89kc [3.89173232s]
Feb 19 12:57:59.401: INFO: Created: latency-svc-4bbst
Feb 19 12:57:59.401: INFO: Got endpoints: latency-svc-4bbst [3.846165769s]
Feb 19 12:57:59.548: INFO: Created: latency-svc-d75mn
Feb 19 12:57:59.561: INFO: Got endpoints: latency-svc-d75mn [3.761070766s]
Feb 19 12:57:59.605: INFO: Created: latency-svc-glkqr
Feb 19 12:57:59.622: INFO: Got endpoints: latency-svc-glkqr [3.455919455s]
Feb 19 12:57:59.747: INFO: Created: latency-svc-qcp6b
Feb 19 12:57:59.768: INFO: Got endpoints: latency-svc-qcp6b [3.137927542s]
Feb 19 12:57:59.964: INFO: Created: latency-svc-gn2xt
Feb 19 12:58:00.009: INFO: Got endpoints: latency-svc-gn2xt [3.343400169s]
Feb 19 12:58:00.141: INFO: Created: latency-svc-j4fsr
Feb 19 12:58:00.166: INFO: Got endpoints: latency-svc-j4fsr [3.236646632s]
Feb 19 12:58:00.355: INFO: Created: latency-svc-vgvxv
Feb 19 12:58:00.355: INFO: Got endpoints: latency-svc-vgvxv [3.032501413s]
Feb 19 12:58:00.475: INFO: Created: latency-svc-27chq
Feb 19 12:58:00.549: INFO: Created: latency-svc-wvtd7
Feb 19 12:58:00.550: INFO: Got endpoints: latency-svc-27chq [2.97208159s]
Feb 19 12:58:00.736: INFO: Got endpoints: latency-svc-wvtd7 [2.971111865s]
Feb 19 12:58:00.822: INFO: Created: latency-svc-6gl7f
Feb 19 12:58:00.956: INFO: Got endpoints: latency-svc-6gl7f [2.919626168s]
Feb 19 12:58:00.986: INFO: Created: latency-svc-wrhbx
Feb 19 12:58:01.016: INFO: Got endpoints: latency-svc-wrhbx [2.903819271s]
Feb 19 12:58:01.237: INFO: Created: latency-svc-qljm4
Feb 19 12:58:01.298: INFO: Created: latency-svc-4bhkb
Feb 19 12:58:01.309: INFO: Got endpoints: latency-svc-qljm4 [2.990132219s]
Feb 19 12:58:01.505: INFO: Got endpoints: latency-svc-4bhkb [2.89633544s]
Feb 19 12:58:01.587: INFO: Created: latency-svc-8nbcs
Feb 19 12:58:01.804: INFO: Got endpoints: latency-svc-8nbcs [2.921986779s]
Feb 19 12:58:01.828: INFO: Created: latency-svc-qd66c
Feb 19 12:58:01.854: INFO: Got endpoints: latency-svc-qd66c [2.678681231s]
Feb 19 12:58:02.148: INFO: Created: latency-svc-djcpv
Feb 19 12:58:02.348: INFO: Got endpoints: latency-svc-djcpv [2.946450768s]
Feb 19 12:58:02.367: INFO: Created: latency-svc-j6jh8
Feb 19 12:58:02.411: INFO: Got endpoints: latency-svc-j6jh8 [2.849867522s]
Feb 19 12:58:02.448: INFO: Created: latency-svc-gsnn5
Feb 19 12:58:02.592: INFO: Got endpoints: latency-svc-gsnn5 [2.969898444s]
Feb 19 12:58:02.757: INFO: Created: latency-svc-8lnj8
Feb 19 12:58:02.969: INFO: Got endpoints: latency-svc-8lnj8 [3.200897945s]
Feb 19 12:58:03.301: INFO: Created: latency-svc-brrk9
Feb 19 12:58:03.569: INFO: Created: latency-svc-97ssd
Feb 19 12:58:03.582: INFO: Got endpoints: latency-svc-brrk9 [3.570661835s]
Feb 19 12:58:03.905: INFO: Got endpoints: latency-svc-97ssd [3.739206782s]
Feb 19 12:58:04.135: INFO: Created: latency-svc-97tc2
Feb 19 12:58:04.362: INFO: Got endpoints: latency-svc-97tc2 [4.007195485s]
Feb 19 12:58:04.379: INFO: Created: latency-svc-xr5cv
Feb 19 12:58:04.391: INFO: Got endpoints: latency-svc-xr5cv [3.8409063s]
Feb 19 12:58:04.439: INFO: Created: latency-svc-zmw9x
Feb 19 12:58:04.452: INFO: Got endpoints: latency-svc-zmw9x [3.716297309s]
Feb 19 12:58:04.629: INFO: Created: latency-svc-6bfb8
Feb 19 12:58:04.638: INFO: Got endpoints: latency-svc-6bfb8 [3.681640062s]
Feb 19 12:58:04.892: INFO: Created: latency-svc-7rgmp
Feb 19 12:58:04.901: INFO: Got endpoints: latency-svc-7rgmp [3.885195494s]
Feb 19 12:58:04.972: INFO: Created: latency-svc-hl4nf
Feb 19 12:58:05.192: INFO: Got endpoints: latency-svc-hl4nf [3.882664306s]
Feb 19 12:58:05.239: INFO: Created: latency-svc-8jvwc
Feb 19 12:58:05.256: INFO: Got endpoints: latency-svc-8jvwc [3.750611298s]
Feb 19 12:58:05.408: INFO: Created: latency-svc-xx2bx
Feb 19 12:58:05.450: INFO: Got endpoints: latency-svc-xx2bx [3.645989152s]
Feb 19 12:58:05.613: INFO: Created: latency-svc-ssfh4
Feb 19 12:58:05.647: INFO: Got endpoints: latency-svc-ssfh4 [3.793006667s]
Feb 19 12:58:05.696: INFO: Created: latency-svc-jvwv4
Feb 19 12:58:05.814: INFO: Got endpoints: latency-svc-jvwv4 [3.465650652s]
Feb 19 12:58:05.863: INFO: Created: latency-svc-f4l2w
Feb 19 12:58:06.038: INFO: Got endpoints: latency-svc-f4l2w [3.626486729s]
Feb 19 12:58:06.083: INFO: Created: latency-svc-gjxbp
Feb 19 12:58:06.108: INFO: Got endpoints: latency-svc-gjxbp [3.515615194s]
Feb 19 12:58:06.248: INFO: Created: latency-svc-drrhg
Feb 19 12:58:06.262: INFO: Got endpoints: latency-svc-drrhg [3.292223955s]
Feb 19 12:58:06.326: INFO: Created: latency-svc-j9j6k
Feb 19 12:58:06.507: INFO: Got endpoints: latency-svc-j9j6k [2.925424544s]
Feb 19 12:58:06.526: INFO: Created: latency-svc-xxphx
Feb 19 12:58:06.555: INFO: Got endpoints: latency-svc-xxphx [2.649466521s]
Feb 19 12:58:06.760: INFO: Created: latency-svc-bj46c
Feb 19 12:58:06.797: INFO: Got endpoints: latency-svc-bj46c [2.434819736s]
Feb 19 12:58:06.983: INFO: Created: latency-svc-wpntk
Feb 19 12:58:07.007: INFO: Got endpoints: latency-svc-wpntk [2.615596124s]
Feb 19 12:58:07.220: INFO: Created: latency-svc-7xx9j
Feb 19 12:58:07.247: INFO: Got endpoints: latency-svc-7xx9j [2.794222738s]
Feb 19 12:58:07.295: INFO: Created: latency-svc-wqhbr
Feb 19 12:58:07.407: INFO: Got endpoints: latency-svc-wqhbr [2.768304632s]
Feb 19 12:58:07.434: INFO: Created: latency-svc-ds4ht
Feb 19 12:58:07.666: INFO: Got endpoints: latency-svc-ds4ht [418.515835ms]
Feb 19 12:58:07.714: INFO: Created: latency-svc-k58jb
Feb 19 12:58:07.859: INFO: Got endpoints: latency-svc-k58jb [2.957843838s]
Feb 19 12:58:07.899: INFO: Created: latency-svc-mm2sl
Feb 19 12:58:07.916: INFO: Got endpoints: latency-svc-mm2sl [2.723737659s]
Feb 19 12:58:08.252: INFO: Created: latency-svc-mnz6t
Feb 19 12:58:08.382: INFO: Got endpoints: latency-svc-mnz6t [3.125881378s]
Feb 19 12:58:08.404: INFO: Created: latency-svc-d8ghn
Feb 19 12:58:08.417: INFO: Got endpoints: latency-svc-d8ghn [2.967205637s]
Feb 19 12:58:08.616: INFO: Created: latency-svc-846p8
Feb 19 12:58:08.644: INFO: Got endpoints: latency-svc-846p8 [2.996243702s]
Feb 19 12:58:08.702: INFO: Created: latency-svc-kh62z
Feb 19 12:58:08.827: INFO: Got endpoints: latency-svc-kh62z [3.012548092s]
Feb 19 12:58:08.875: INFO: Created: latency-svc-jmmbz
Feb 19 12:58:08.905: INFO: Got endpoints: latency-svc-jmmbz [2.865878956s]
Feb 19 12:58:09.223: INFO: Created: latency-svc-47cxq
Feb 19 12:58:09.262: INFO: Got endpoints: latency-svc-47cxq [3.153049044s]
Feb 19 12:58:09.438: INFO: Created: latency-svc-tzjc4
Feb 19 12:58:09.447: INFO: Got endpoints: latency-svc-tzjc4 [3.185104832s]
Feb 19 12:58:09.711: INFO: Created: latency-svc-rkkqr
Feb 19 12:58:09.720: INFO: Got endpoints: latency-svc-rkkqr [3.212136867s]
Feb 19 12:58:09.796: INFO: Created: latency-svc-h6v54
Feb 19 12:58:09.891: INFO: Got endpoints: latency-svc-h6v54 [3.335799412s]
Feb 19 12:58:09.945: INFO: Created: latency-svc-m9fxz
Feb 19 12:58:09.955: INFO: Got endpoints: latency-svc-m9fxz [3.157015131s]
Feb 19 12:58:10.146: INFO: Created: latency-svc-j92z6
Feb 19 12:58:10.167: INFO: Got endpoints: latency-svc-j92z6 [3.159535019s]
Feb 19 12:58:10.204: INFO: Created: latency-svc-7hnn9
Feb 19 12:58:10.227: INFO: Got endpoints: latency-svc-7hnn9 [2.81952429s]
Feb 19 12:58:10.334: INFO: Created: latency-svc-tsxln
Feb 19 12:58:10.350: INFO: Got endpoints: latency-svc-tsxln [2.682912717s]
Feb 19 12:58:10.397: INFO: Created: latency-svc-22dwv
Feb 19 12:58:10.532: INFO: Got endpoints: latency-svc-22dwv [2.672250427s]
Feb 19 12:58:10.567: INFO: Created: latency-svc-76rsx
Feb 19 12:58:10.701: INFO: Got endpoints: latency-svc-76rsx [2.784878317s]
Feb 19 12:58:10.753: INFO: Created: latency-svc-6khxj
Feb 19 12:58:10.764: INFO: Got endpoints: latency-svc-6khxj [2.381273216s]
Feb 19 12:58:10.916: INFO: Created: latency-svc-drm76
Feb 19 12:58:10.953: INFO: Got endpoints: latency-svc-drm76 [2.535161647s]
Feb 19 12:58:10.964: INFO: Created: latency-svc-6mx2d
Feb 19 12:58:10.975: INFO: Got endpoints: latency-svc-6mx2d [2.330208744s]
Feb 19 12:58:11.179: INFO: Created: latency-svc-frnff
Feb 19 12:58:11.244: INFO: Got endpoints: latency-svc-frnff [2.416679984s]
Feb 19 12:58:11.419: INFO: Created: latency-svc-rkc4c
Feb 19 12:58:11.573: INFO: Got endpoints: latency-svc-rkc4c [2.668236564s]
Feb 19 12:58:11.595: INFO: Created: latency-svc-jf92p
Feb 19 12:58:11.661: INFO: Got endpoints: latency-svc-jf92p [2.399028955s]
Feb 19 12:58:11.796: INFO: Created: latency-svc-j7vbf
Feb 19 12:58:11.823: INFO: Got endpoints: latency-svc-j7vbf [2.375261856s]
Feb 19 12:58:11.891: INFO: Created: latency-svc-k7mgf
Feb 19 12:58:11.997: INFO: Got endpoints: latency-svc-k7mgf [2.277219439s]
Feb 19 12:58:12.024: INFO: Created: latency-svc-pcbnn
Feb 19 12:58:12.051: INFO: Got endpoints: latency-svc-pcbnn [2.15985418s]
Feb 19 12:58:12.221: INFO: Created: latency-svc-wz64m
Feb 19 12:58:12.225: INFO: Got endpoints: latency-svc-wz64m [2.269492383s]
Feb 19 12:58:12.427: INFO: Created: latency-svc-knpzw
Feb 19 12:58:12.461: INFO: Got endpoints: latency-svc-knpzw [2.293203974s]
Feb 19 12:58:12.916: INFO: Created: latency-svc-kvm7d
Feb 19 12:58:12.947: INFO: Got endpoints: latency-svc-kvm7d [2.719821469s]
Feb 19 12:58:13.084: INFO: Created: latency-svc-2x2z5
Feb 19 12:58:13.145: INFO: Got endpoints: latency-svc-2x2z5 [2.795626614s]
Feb 19 12:58:13.449: INFO: Created: latency-svc-969dx
Feb 19 12:58:13.463: INFO: Created: latency-svc-6cm6j
Feb 19 12:58:13.465: INFO: Got endpoints: latency-svc-969dx [2.933405982s]
Feb 19 12:58:13.473: INFO: Got endpoints: latency-svc-6cm6j [2.771257686s]
Feb 19 12:58:13.533: INFO: Created: latency-svc-h9wmf
Feb 19 12:58:13.643: INFO: Got endpoints: latency-svc-h9wmf [2.879115191s]
Feb 19 12:58:13.714: INFO: Created: latency-svc-ggktd
Feb 19 12:58:13.991: INFO: Got endpoints: latency-svc-ggktd [3.038020474s]
Feb 19 12:58:14.376: INFO: Created: latency-svc-9lrkx
Feb 19 12:58:14.664: INFO: Got endpoints: latency-svc-9lrkx [3.688940964s]
Feb 19 12:58:14.696: INFO: Created: latency-svc-srptn
Feb 19 12:58:14.710: INFO: Got endpoints: latency-svc-srptn [3.465920948s]
Feb 19 12:58:14.947: INFO: Created: latency-svc-b2wlv
Feb 19 12:58:15.003: INFO: Got endpoints: latency-svc-b2wlv [3.429490226s]
Feb 19 12:58:15.009: INFO: Created: latency-svc-tvh8h
Feb 19 12:58:15.265: INFO: Got endpoints: latency-svc-tvh8h [3.603198413s]
Feb 19 12:58:15.294: INFO: Created: latency-svc-kkhqf
Feb 19 12:58:15.315: INFO: Got endpoints: latency-svc-kkhqf [3.492112157s]
Feb 19 12:58:15.535: INFO: Created: latency-svc-jc4rq
Feb 19 12:58:15.548: INFO: Got endpoints: latency-svc-jc4rq [3.550637048s]
Feb 19 12:58:15.616: INFO: Created: latency-svc-77pr6
Feb 19 12:58:16.385: INFO: Got endpoints: latency-svc-77pr6 [4.333206058s]
Feb 19 12:58:16.492: INFO: Created: latency-svc-hq8jk
Feb 19 12:58:17.128: INFO: Got endpoints: latency-svc-hq8jk [4.903715654s]
Feb 19 12:58:17.353: INFO: Created: latency-svc-qx8hs
Feb 19 12:58:17.364: INFO: Created: latency-svc-lv64d
Feb 19 12:58:17.384: INFO: Got endpoints: latency-svc-lv64d [4.436303277s]
Feb 19 12:58:17.384: INFO: Got endpoints: latency-svc-qx8hs [4.922505067s]
Feb 19 12:58:17.606: INFO: Created: latency-svc-g2hnt
Feb 19 12:58:17.654: INFO: Got endpoints: latency-svc-g2hnt [4.508756591s]
Feb 19 12:58:17.832: INFO: Created: latency-svc-pxtpm
Feb 19 12:58:17.854: INFO: Got endpoints: latency-svc-pxtpm [4.388054893s]
Feb 19 12:58:18.013: INFO: Created: latency-svc-n8b7z
Feb 19 12:58:18.057: INFO: Got endpoints: latency-svc-n8b7z [4.58379193s]
Feb 19 12:58:18.278: INFO: Created: latency-svc-l5d8x
Feb 19 12:58:18.313: INFO: Got endpoints: latency-svc-l5d8x [4.669917469s]
Feb 19 12:58:18.499: INFO: Created: latency-svc-bccbc
Feb 19 12:58:18.508: INFO: Got endpoints: latency-svc-bccbc [4.517111034s]
Feb 19 12:58:18.672: INFO: Created: latency-svc-wwl56
Feb 19 12:58:18.688: INFO: Got endpoints: latency-svc-wwl56 [4.023968898s]
Feb 19 12:58:18.761: INFO: Created: latency-svc-bz5nz
Feb 19 12:58:18.915: INFO: Got endpoints: latency-svc-bz5nz [4.204657847s]
Feb 19 12:58:18.943: INFO: Created: latency-svc-q5gqn
Feb 19 12:58:18.993: INFO: Got endpoints: latency-svc-q5gqn [3.989504082s]
Feb 19 12:58:19.290: INFO: Created: latency-svc-nvvlb
Feb 19 12:58:19.311: INFO: Got endpoints: latency-svc-nvvlb [4.045758369s]
Feb 19 12:58:19.467: INFO: Created: latency-svc-4ljlh
Feb 19 12:58:19.488: INFO: Got endpoints: latency-svc-4ljlh [4.172521974s]
Feb 19 12:58:19.560: INFO: Created: latency-svc-lfzqb
Feb 19 12:58:19.712: INFO: Got endpoints: latency-svc-lfzqb [4.163538585s]
Feb 19 12:58:19.736: INFO: Created: latency-svc-nbjft
Feb 19 12:58:19.756: INFO: Got endpoints: latency-svc-nbjft [3.370762606s]
Feb 19 12:58:19.933: INFO: Created: latency-svc-qczxt
Feb 19 12:58:19.951: INFO: Got endpoints: latency-svc-qczxt [2.822792322s]
Feb 19 12:58:20.029: INFO: Created: latency-svc-hnh8k
Feb 19 12:58:20.233: INFO: Got endpoints: latency-svc-hnh8k [2.849473777s]
Feb 19 12:58:20.242: INFO: Created: latency-svc-qgn47
Feb 19 12:58:20.261: INFO: Got endpoints: latency-svc-qgn47 [2.876493714s]
Feb 19 12:58:20.484: INFO: Created: latency-svc-lhp9w
Feb 19 12:58:20.513: INFO: Got endpoints: latency-svc-lhp9w [2.858026162s]
Feb 19 12:58:20.820: INFO: Created: latency-svc-j8jfk
Feb 19 12:58:20.889: INFO: Got endpoints: latency-svc-j8jfk [3.034524779s]
Feb 19 12:58:21.873: INFO: Created: latency-svc-4jqqr
Feb 19 12:58:21.873: INFO: Got endpoints: latency-svc-4jqqr [3.81632894s]
Feb 19 12:58:22.011: INFO: Created: latency-svc-sxdr5
Feb 19 12:58:22.031: INFO: Got endpoints: latency-svc-sxdr5 [3.717261313s]
Feb 19 12:58:22.191: INFO: Created: latency-svc-r5w95
Feb 19 12:58:22.216: INFO: Got endpoints: latency-svc-r5w95 [3.707606677s]
Feb 19 12:58:22.460: INFO: Created: latency-svc-ghp9s
Feb 19 12:58:22.466: INFO: Got endpoints: latency-svc-ghp9s [3.778260854s]
Feb 19 12:58:22.702: INFO: Created: latency-svc-wp822
Feb 19 12:58:22.712: INFO: Got endpoints: latency-svc-wp822 [3.79735564s]
Feb 19 12:58:23.314: INFO: Created: latency-svc-4npkc
Feb 19 12:58:23.428: INFO: Got endpoints: latency-svc-4npkc [4.433220795s]
Feb 19 12:58:23.462: INFO: Created: latency-svc-flm2m
Feb 19 12:58:23.506: INFO: Got endpoints: latency-svc-flm2m [4.194586204s]
Feb 19 12:58:23.676: INFO: Created: latency-svc-b7x5b
Feb 19 12:58:23.899: INFO: Got endpoints: latency-svc-b7x5b [4.410896586s]
Feb 19 12:58:23.907: INFO: Created: latency-svc-wqk4f
Feb 19 12:58:23.924: INFO: Got endpoints: latency-svc-wqk4f [4.212240328s]
Feb 19 12:58:24.005: INFO: Created: latency-svc-rcw4n
Feb 19 12:58:24.368: INFO: Got endpoints: latency-svc-rcw4n [4.612058112s]
Feb 19 12:58:24.404: INFO: Created: latency-svc-lbkth
Feb 19 12:58:24.424: INFO: Got endpoints: latency-svc-lbkth [4.472854714s]
Feb 19 12:58:24.605: INFO: Created: latency-svc-csv87
Feb 19 12:58:24.621: INFO: Got endpoints: latency-svc-csv87 [4.387058615s]
Feb 19 12:58:24.850: INFO: Created: latency-svc-9fqmm
Feb 19 12:58:24.907: INFO: Got endpoints: latency-svc-9fqmm [4.646581949s]
Feb 19 12:58:25.223: INFO: Created: latency-svc-d5pwv
Feb 19 12:58:25.262: INFO: Got endpoints: latency-svc-d5pwv [4.748570085s]
Feb 19 12:58:25.491: INFO: Created: latency-svc-n94qf
Feb 19 12:58:25.516: INFO: Got endpoints: latency-svc-n94qf [4.626029767s]
Feb 19 12:58:25.668: INFO: Created: latency-svc-l8wm2
Feb 19 12:58:25.684: INFO: Got endpoints: latency-svc-l8wm2 [3.810608847s]
Feb 19 12:58:25.737: INFO: Created: latency-svc-qrdft
Feb 19 12:58:25.839: INFO: Got endpoints: latency-svc-qrdft [3.807845369s]
Feb 19 12:58:25.859: INFO: Created: latency-svc-hdqmf
Feb 19 12:58:25.912: INFO: Got endpoints: latency-svc-hdqmf [3.695162128s]
Feb 19 12:58:26.046: INFO: Created: latency-svc-zmll7
Feb 19 12:58:26.046: INFO: Got endpoints: latency-svc-zmll7 [3.57961484s]
Feb 19 12:58:26.082: INFO: Created: latency-svc-tfv69
Feb 19 12:58:26.191: INFO: Got endpoints: latency-svc-tfv69 [3.478755203s]
Feb 19 12:58:26.218: INFO: Created: latency-svc-s8svj
Feb 19 12:58:26.257: INFO: Created: latency-svc-blkjl
Feb 19 12:58:26.265: INFO: Got endpoints: latency-svc-s8svj [2.837587535s]
Feb 19 12:58:26.269: INFO: Got endpoints: latency-svc-blkjl [2.762963681s]
Feb 19 12:58:26.269: INFO: Latencies: [231.6996ms 350.42294ms 396.110394ms 418.515835ms 675.102157ms 748.561947ms 932.767094ms 1.141410074s 1.208152312s 1.389187297s 1.535547267s 1.819312635s 1.901146913s 2.135500694s 2.15985418s 2.269492383s 2.277219439s 2.293203974s 2.330208744s 2.375261856s 2.381273216s 2.399028955s 2.416679984s 2.434819736s 2.535161647s 2.615596124s 2.649466521s 2.668236564s 2.672250427s 2.678681231s 2.682912717s 2.719821469s 2.723737659s 2.762963681s 2.768304632s 2.771257686s 2.784878317s 2.794222738s 2.795626614s 2.81952429s 2.822792322s 2.837587535s 2.849473777s 2.849867522s 2.858026162s 2.865878956s 2.876493714s 2.879115191s 2.89633544s 2.903819271s 2.919626168s 2.921986779s 2.925424544s 2.933405982s 2.938563807s 2.946450768s 2.957843838s 2.967205637s 2.969898444s 2.971111865s 2.97208159s 2.990132219s 2.996243702s 2.996752426s 3.012548092s 3.032501413s 3.034524779s 3.038020474s 3.097525793s 3.125881378s 3.137927542s 3.153049044s 3.157015131s 3.159535019s 3.185104832s 3.194552342s 3.200897945s 3.212136867s 3.236646632s 3.238036577s 3.241777445s 3.260692955s 3.292223955s 3.335799412s 3.343400169s 3.370762606s 3.380138868s 3.429490226s 3.455919455s 3.465650652s 3.465920948s 3.478755203s 3.492112157s 3.515615194s 3.550637048s 3.570661835s 3.57961484s 3.603198413s 3.626486729s 3.64112463s 3.645989152s 3.65720252s 3.681640062s 3.688940964s 3.695162128s 3.707606677s 3.716297309s 3.717261313s 3.739206782s 3.750611298s 3.761070766s 3.778260854s 3.793006667s 3.79735564s 3.807845369s 3.810608847s 3.81632894s 3.8409063s 3.846165769s 3.882664306s 3.885195494s 3.89173232s 3.989504082s 3.996674429s 4.007195485s 4.018921168s 4.023968898s 4.045758369s 4.163538585s 4.172521974s 4.182279608s 4.194586204s 4.204657847s 4.212240328s 4.212280544s 4.228672272s 4.244496886s 4.276242464s 4.333206058s 4.357340064s 4.375621759s 4.387058615s 4.388054893s 4.394097541s 4.410896586s 4.417166253s 4.420834234s 4.433220795s 4.436303277s 4.472854714s 4.477184769s 4.508756591s 4.517111034s 4.535852328s 4.566808178s 4.58379193s 4.612058112s 4.616869551s 4.626029767s 4.646581949s 4.660997809s 4.668849723s 4.669917469s 4.700729391s 4.747452192s 4.748570085s 4.835309593s 4.903715654s 4.922505067s 4.963054201s 4.981823483s 5.057513608s 5.084270923s 5.133879129s 5.189020695s 5.222222178s 5.230622802s 5.241580419s 5.253112517s 5.302817763s 5.568302557s 5.653348547s 5.715755333s 5.735584641s 5.77334697s 5.818174048s 5.848633703s 6.009653117s 6.029076253s 6.163714673s 6.190803706s 6.267284955s 6.29733981s 6.301446789s 6.519702555s 6.567971791s 6.580335315s 6.73090098s 6.815057392s 6.991636298s]
Feb 19 12:58:26.269: INFO: 50 %ile: 3.645989152s
Feb 19 12:58:26.269: INFO: 90 %ile: 5.568302557s
Feb 19 12:58:26.269: INFO: 99 %ile: 6.815057392s
Feb 19 12:58:26.269: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 12:58:26.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-rjwh6" for this suite.
Feb 19 12:59:56.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 12:59:56.477: INFO: namespace: e2e-tests-svc-latency-rjwh6, resource: bindings, ignored listing per whitelist
Feb 19 12:59:56.618: INFO: namespace e2e-tests-svc-latency-rjwh6 deletion completed in 1m30.339838263s

• [SLOW TEST:152.359 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
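The latency summary above reports the 50th, 90th, and 99th percentiles over 200 endpoint-propagation samples. As a rough illustration of how such a summary can be derived from the raw durations, here is a minimal, self-contained Go sketch using the nearest-rank method; this is only a model of the calculation, not the e2e framework's exact percentile code, and the sample values below are just a handful of the durations quoted in the log.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at the given percentile of a sorted sample,
// using the nearest-rank method. Assumption: the real framework may round or
// interpolate differently.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted)*p + 99) / 100 // ceil(n*p/100), 1-based rank
	if idx < 1 {
		idx = 1
	}
	if idx > len(sorted) {
		idx = len(sorted)
	}
	return sorted[idx-1]
}

func main() {
	// A small subset of the 200 endpoint latencies reported above.
	samples := []time.Duration{
		6992 * time.Millisecond,
		232 * time.Millisecond,
		3646 * time.Millisecond,
		419 * time.Millisecond,
		5568 * time.Millisecond,
		6815 * time.Millisecond,
		2898 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}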
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 12:59:56.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 19 12:59:56.839: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-nthcj" to be "success or failure"
Feb 19 12:59:56.876: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 36.659412ms
Feb 19 12:59:58.905: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066235784s
Feb 19 13:00:00.937: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098025429s
Feb 19 13:00:02.948: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108771227s
Feb 19 13:00:05.413: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573646118s
Feb 19 13:00:07.435: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.595945006s
Feb 19 13:00:09.455: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.616150659s
Feb 19 13:00:11.468: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.629168163s
Feb 19 13:00:13.750: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.911254738s
STEP: Saw pod success
Feb 19 13:00:13.750: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 19 13:00:13.770: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 19 13:00:14.127: INFO: Waiting for pod pod-host-path-test to disappear
Feb 19 13:00:14.136: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:00:14.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-nthcj" for this suite.
Feb 19 13:00:20.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:00:20.357: INFO: namespace: e2e-tests-hostpath-nthcj, resource: bindings, ignored listing per whitelist
Feb 19 13:00:20.493: INFO: namespace e2e-tests-hostpath-nthcj deletion completed in 6.353381813s

• [SLOW TEST:23.875 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
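The hostPath mode test above, like the volume dumps later in this log that show DefaultMode:*420, deals with file modes that the Kubernetes API reports as decimal integers. A minimal standard-library Go sketch of the decimal/octal correspondence (nothing here comes from the test itself): 420 decimal is 0644 octal, i.e. rw-r--r--.

package main

import (
	"fmt"
	"os"
)

func main() {
	// 420 is the decimal default mode seen in the object dumps below;
	// printed in octal it is 644, the familiar rw-r--r-- permission set.
	const defaultMode = 420
	fmt.Printf("decimal %d = octal %o (%v)\n", defaultMode, defaultMode, os.FileMode(defaultMode))
}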
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:00:20.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 19 13:00:21.322: INFO: created pod pod-service-account-defaultsa
Feb 19 13:00:21.322: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 19 13:00:21.347: INFO: created pod pod-service-account-mountsa
Feb 19 13:00:21.347: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 19 13:00:21.384: INFO: created pod pod-service-account-nomountsa
Feb 19 13:00:21.384: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 19 13:00:21.604: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 19 13:00:21.604: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 19 13:00:21.671: INFO: created pod pod-service-account-mountsa-mountspec
Feb 19 13:00:21.671: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 19 13:00:21.877: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 19 13:00:21.877: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 19 13:00:21.971: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 19 13:00:21.971: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 19 13:00:22.051: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 19 13:00:22.051: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 19 13:00:22.100: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 19 13:00:22.100: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:00:22.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m9597" for this suite.
Feb 19 13:00:52.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:00:54.602: INFO: namespace: e2e-tests-svcaccounts-m9597, resource: bindings, ignored listing per whitelist
Feb 19 13:00:54.653: INFO: namespace e2e-tests-svcaccounts-m9597 deletion completed in 32.493968757s

• [SLOW TEST:34.157 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
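The "token volume mount: true/false" lines above exercise the precedence between a ServiceAccount's automountServiceAccountToken field and the pod-level override: when the pod spec sets the field it wins, otherwise the ServiceAccount's value applies, and if neither is set the token is mounted. A minimal Go sketch of that precedence, reproducing a few of the cases from the log; effectiveAutomount is a hypothetical helper written for illustration, not the kubelet's implementation.

package main

import "fmt"

// effectiveAutomount mirrors the precedence the test exercises: an explicit
// pod-level AutomountServiceAccountToken overrides the ServiceAccount's
// setting; if neither is set, the token is mounted by default.
func effectiveAutomount(saSetting, podSetting *bool) bool {
	if podSetting != nil {
		return *podSetting
	}
	if saSetting != nil {
		return *saSetting
	}
	return true
}

func main() {
	t, f := true, false
	cases := []struct {
		name    string
		sa, pod *bool
	}{
		{"defaultsa", nil, nil},         // neither set -> mounted
		{"nomountsa", &f, nil},          // SA opts out -> not mounted
		{"nomountsa-mountspec", &f, &t}, // pod override wins -> mounted
		{"mountsa-nomountspec", &t, &f}, // pod override wins -> not mounted
	}
	for _, c := range cases {
		fmt.Printf("pod-service-account-%s: token volume mount: %v\n", c.name, effectiveAutomount(c.sa, c.pod))
	}
}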
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:00:54.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dd6ba2e1-5317-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 13:00:55.427: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-pvk6x" to be "success or failure"
Feb 19 13:00:55.470: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 43.224167ms
Feb 19 13:00:57.908: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480937563s
Feb 19 13:00:59.928: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501057745s
Feb 19 13:01:02.185: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.757848006s
Feb 19 13:01:06.222: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.795268679s
Feb 19 13:01:08.239: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.811942762s
Feb 19 13:01:10.251: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.823842366s
STEP: Saw pod success
Feb 19 13:01:10.251: INFO: Pod "pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:01:10.254: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 19 13:01:11.251: INFO: Waiting for pod pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:01:11.761: INFO: Pod pod-projected-configmaps-dd6ffb19-5317-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:01:11.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pvk6x" for this suite.
Feb 19 13:01:17.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:01:18.063: INFO: namespace: e2e-tests-projected-pvk6x, resource: bindings, ignored listing per whitelist
Feb 19 13:01:18.216: INFO: namespace e2e-tests-projected-pvk6x deletion completed in 6.419768166s

• [SLOW TEST:23.563 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
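Several of the tests in this run follow the same pattern: wait up to 5m0s for a pod to reach "success or failure", polling its phase every couple of seconds and logging the elapsed time. A minimal, self-contained Go sketch of that polling loop; waitForSuccessOrFailure and getPhase are hypothetical stand-ins for the framework's pod-phase lookup, not its actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// phase stands in for the pod phase strings seen in the log.
type phase string

const (
	pending   phase = "Pending"
	succeeded phase = "Succeeded"
	failed    phase = "Failed"
)

// waitForSuccessOrFailure polls getPhase until the pod is terminal or the
// timeout expires, echoing the "Waiting up to 5m0s ... to be 'success or
// failure'" lines above.
func waitForSuccessOrFailure(getPhase func() phase, timeout, interval time.Duration) (phase, error) {
	deadline := time.Now().Add(timeout)
	for {
		p := getPhase()
		fmt.Printf("Phase=%q\n", p)
		if p == succeeded || p == failed {
			return p, nil
		}
		if time.Now().After(deadline) {
			return p, errors.New("timed out waiting for a terminal phase")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Simulated pod that becomes Succeeded after a few polls.
	polls := 0
	getPhase := func() phase {
		polls++
		if polls < 3 {
			return pending
		}
		return succeeded
	}
	p, err := waitForSuccessOrFailure(getPhase, 5*time.Minute, 2*time.Second)
	fmt.Println(p, err)
}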
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:01:18.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 13:01:18.372: INFO: Creating deployment "nginx-deployment"
Feb 19 13:01:18.382: INFO: Waiting for observed generation 1
Feb 19 13:01:20.608: INFO: Waiting for all required pods to come up
Feb 19 13:01:20.702: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 19 13:02:15.004: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 19 13:02:15.016: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 19 13:02:15.031: INFO: Updating deployment nginx-deployment
Feb 19 13:02:15.031: INFO: Waiting for observed generation 2
Feb 19 13:02:18.502: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 19 13:02:18.896: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 19 13:02:19.006: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 19 13:02:19.063: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 19 13:02:19.063: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 19 13:02:19.068: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 19 13:02:19.076: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 19 13:02:19.076: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 19 13:02:19.625: INFO: Updating deployment nginx-deployment
Feb 19 13:02:19.626: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 19 13:02:19.680: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 19 13:02:24.064: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
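The 20/13 split verified above follows from proportional scaling: before the scale-up the first (old) ReplicaSet has 8 replicas and the second (nginx:404) one has 5, 13 in total, and scaling the deployment from 10 to 30 with maxSurge=3 allows 33 replicas overall, so each ReplicaSet is resized to roughly its proportional share of that allowance. A simplified Go sketch of the arithmetic; this models the expectation the test checks, not the deployment controller's exact algorithm.

package main

import (
	"fmt"
	"math"
)

// proportionalSizes resizes each ReplicaSet to its share of the new allowed
// total (desired replicas + maxSurge), proportional to its current size and
// rounded to the nearest integer. Simplified model for illustration only.
func proportionalSizes(current []int32, desired, maxSurge int32) []int32 {
	var total int32
	for _, c := range current {
		total += c
	}
	allowed := desired + maxSurge
	out := make([]int32, len(current))
	for i, c := range current {
		out[i] = int32(math.Round(float64(c) * float64(allowed) / float64(total)))
	}
	return out
}

func main() {
	// Old RS: 8 replicas, new RS: 5 replicas; scale from 10 to 30 with maxSurge=3.
	fmt.Println(proportionalSizes([]int32{8, 5}, 30, 3)) // [20 13], matching the log
}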
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 19 13:02:24.905: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-88z4s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-88z4s/deployments/nginx-deployment,UID:eb354d75-5317-11ea-a994-fa163e34d433,ResourceVersion:22204982,Generation:3,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-19 13:02:15 +0000 UTC 2020-02-19 13:01:18 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-19 13:02:20 +0000 UTC 2020-02-19 13:02:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 19 13:02:25.128: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-88z4s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-88z4s/replicasets/nginx-deployment-5c98f8fb5,UID:0cfbd04e-5318-11ea-a994-fa163e34d433,ResourceVersion:22205036,Generation:3,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment eb354d75-5317-11ea-a994-fa163e34d433 0xc000f7b7d7 0xc000f7b7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 19 13:02:25.128: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 19 13:02:25.129: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-88z4s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-88z4s/replicasets/nginx-deployment-85ddf47c5d,UID:eb39173e-5317-11ea-a994-fa163e34d433,ResourceVersion:22205031,Generation:3,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment eb354d75-5317-11ea-a994-fa163e34d433 0xc000f7b8e7 0xc000f7b8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 19 13:02:25.386: INFO: Pod "nginx-deployment-5c98f8fb5-5dcvn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5dcvn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-5dcvn,UID:11e048a0-5318-11ea-a994-fa163e34d433,ResourceVersion:22205022,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0010ddef0 0xc0010ddef1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010ddfc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b2060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.387: INFO: Pod "nginx-deployment-5c98f8fb5-742mt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-742mt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-742mt,UID:0d5df6b7-5318-11ea-a994-fa163e34d433,ResourceVersion:22204994,Generation:0,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b2147 0xc0006b2148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b21c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b2210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.388: INFO: Pod "nginx-deployment-5c98f8fb5-7zxs2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7zxs2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-7zxs2,UID:0d04cc9d-5318-11ea-a994-fa163e34d433,ResourceVersion:22204968,Generation:0,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3877 0xc0006b3878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b38f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b3990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.389: INFO: Pod "nginx-deployment-5c98f8fb5-bxf88" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bxf88,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-bxf88,UID:1064b3a4-5318-11ea-a994-fa163e34d433,ResourceVersion:22205039,Generation:0,CreationTimestamp:2020-02-19 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3aa7 0xc0006b3aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b3b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b3b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.389: INFO: Pod "nginx-deployment-5c98f8fb5-k5l67" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k5l67,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-k5l67,UID:11e03c23-5318-11ea-a994-fa163e34d433,ResourceVersion:22205016,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3c57 0xc0006b3c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b3cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b3ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.389: INFO: Pod "nginx-deployment-5c98f8fb5-lhtsl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lhtsl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-lhtsl,UID:108acc7a-5318-11ea-a994-fa163e34d433,ResourceVersion:22204996,Generation:0,CreationTimestamp:2020-02-19 13:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3d57 0xc0006b3d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b3dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b3df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.390: INFO: Pod "nginx-deployment-5c98f8fb5-m9xhh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m9xhh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-m9xhh,UID:126bad90-5318-11ea-a994-fa163e34d433,ResourceVersion:22205035,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3e77 0xc0006b3e78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0006b3ee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0006b3f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.390: INFO: Pod "nginx-deployment-5c98f8fb5-pzvjp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pzvjp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-pzvjp,UID:0d5326b5-5318-11ea-a994-fa163e34d433,ResourceVersion:22204979,Generation:0,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0006b3f87 0xc0006b3f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e6070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e60a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.391: INFO: Pod "nginx-deployment-5c98f8fb5-qkkcd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qkkcd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-qkkcd,UID:11dfbd6b-5318-11ea-a994-fa163e34d433,ResourceVersion:22205018,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0017e62f7 0xc0017e62f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e6410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e6460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.391: INFO: Pod "nginx-deployment-5c98f8fb5-slkhq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-slkhq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-slkhq,UID:0d20b59d-5318-11ea-a994-fa163e34d433,ResourceVersion:22204973,Generation:0,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0017e69e7 0xc0017e69e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e6a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e6a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.391: INFO: Pod "nginx-deployment-5c98f8fb5-sxtd4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sxtd4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-sxtd4,UID:11dffd46-5318-11ea-a994-fa163e34d433,ResourceVersion:22205021,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0017e6fb7 0xc0017e6fb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e70d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.392: INFO: Pod "nginx-deployment-5c98f8fb5-w4j7l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w4j7l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-w4j7l,UID:0d1ef43d-5318-11ea-a994-fa163e34d433,ResourceVersion:22204971,Generation:0,CreationTimestamp:2020-02-19 13:02:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0017e7197 0xc0017e7198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e72d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-19 13:02:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.393: INFO: Pod "nginx-deployment-5c98f8fb5-xp2ld" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xp2ld,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-5c98f8fb5-xp2ld,UID:108ac931-5318-11ea-a994-fa163e34d433,ResourceVersion:22204998,Generation:0,CreationTimestamp:2020-02-19 13:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0cfbd04e-5318-11ea-a994-fa163e34d433 0xc0017e7487 0xc0017e7488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e7530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.393: INFO: Pod "nginx-deployment-85ddf47c5d-2t46g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2t46g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-2t46g,UID:eb722b5a-5317-11ea-a994-fa163e34d433,ResourceVersion:22204900,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017e76c7 0xc0017e76c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e7880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e78a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-19 13:01:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://72606cfdc537c490aa97266e54506bccf0a871d39ae440250bc38cf9d3a9965c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.394: INFO: Pod "nginx-deployment-85ddf47c5d-4n4ph" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4n4ph,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-4n4ph,UID:11dfddc8-5318-11ea-a994-fa163e34d433,ResourceVersion:22205019,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017e7997 0xc0017e7998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e7a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.394: INFO: Pod "nginx-deployment-85ddf47c5d-5dxdz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5dxdz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-5dxdz,UID:12614f7a-5318-11ea-a994-fa163e34d433,ResourceVersion:22205027,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017e7b57 0xc0017e7b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e7bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.394: INFO: Pod "nginx-deployment-85ddf47c5d-854hf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-854hf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-854hf,UID:eb89486f-5317-11ea-a994-fa163e34d433,ResourceVersion:22204903,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017e7c77 0xc0017e7c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017e7d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017e7d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-19 13:01:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://34b901e5d64ba71ac4d21a5d3c3d9309df1912d18a29d1343fa76cb2a0c88610}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.395: INFO: Pod "nginx-deployment-85ddf47c5d-bbmn4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bbmn4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-bbmn4,UID:eb890d4d-5317-11ea-a994-fa163e34d433,ResourceVersion:22204911,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc000030b87 0xc000030b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000030d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000030d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-19 13:01:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4274a2e4d6b90b7bc7748a4bb3732a30b395a9c7c7780e6ce1fddd743a98af6a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.396: INFO: Pod "nginx-deployment-85ddf47c5d-fk2rj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fk2rj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-fk2rj,UID:eb6f8452-5317-11ea-a994-fa163e34d433,ResourceVersion:22204880,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0000358c7 0xc0000358c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000035c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000035c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-19 13:01:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ff9ddbaa11431a7b0a25cd7f20fca07a08ed44945affc06ebbea0e549d42fc9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.396: INFO: Pod "nginx-deployment-85ddf47c5d-fvns8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fvns8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-fvns8,UID:eb896cdb-5317-11ea-a994-fa163e34d433,ResourceVersion:22204907,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc000035fd7 0xc000035fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004420f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0004425d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-19 13:01:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2deb34b6ae3657f28fc6ded0a7865c417bb6940515f43a1940c2a8b5a7fa0f15}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.397: INFO: Pod "nginx-deployment-85ddf47c5d-jlz6t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jlz6t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-jlz6t,UID:eb550013-5317-11ea-a994-fa163e34d433,ResourceVersion:22204883,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc000056c07 0xc000056c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000057610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0000578e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-19 13:01:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d645126f43fa5d387f82f1fe05813e373d8f33a48bcfc3d482bf2acbe6d25448}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.397: INFO: Pod "nginx-deployment-85ddf47c5d-kjrlq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kjrlq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-kjrlq,UID:105a82d3-5318-11ea-a994-fa163e34d433,ResourceVersion:22204989,Generation:0,CreationTimestamp:2020-02-19 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc000057cf7 0xc000057cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001256070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012560a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.398: INFO: Pod "nginx-deployment-85ddf47c5d-kpdsk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpdsk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-kpdsk,UID:11df0ed0-5318-11ea-a994-fa163e34d433,ResourceVersion:22205023,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001256cb7 0xc001256cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001256d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001256d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.398: INFO: Pod "nginx-deployment-85ddf47c5d-l26jg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l26jg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-l26jg,UID:eb4b6928-5317-11ea-a994-fa163e34d433,ResourceVersion:22204897,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257287 0xc001257288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001257300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-19 13:01:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://612718ae33620abb7ae6eb13de0228d56971fe106262752f34d5b786c8dba349}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.399: INFO: Pod "nginx-deployment-85ddf47c5d-ltlbr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ltlbr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-ltlbr,UID:125e075f-5318-11ea-a994-fa163e34d433,ResourceVersion:22205033,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257417 0xc001257418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012578c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012578e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.399: INFO: Pod "nginx-deployment-85ddf47c5d-m8bd8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8bd8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-m8bd8,UID:126da765-5318-11ea-a994-fa163e34d433,ResourceVersion:22205034,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257967 0xc001257968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012579e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.400: INFO: Pod "nginx-deployment-85ddf47c5d-prkp2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-prkp2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-prkp2,UID:126be602-5318-11ea-a994-fa163e34d433,ResourceVersion:22205028,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257ae7 0xc001257ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001257b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.400: INFO: Pod "nginx-deployment-85ddf47c5d-ptqpn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ptqpn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-ptqpn,UID:eb54a97d-5317-11ea-a994-fa163e34d433,ResourceVersion:22204893,Generation:0,CreationTimestamp:2020-02-19 13:01:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257be7 0xc001257be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001257c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:01:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-19 13:01:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-19 13:02:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e9784d49bd43e0bbf25820d9dc8749cc0b53d52f260c2daf642d3698879b0c98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.400: INFO: Pod "nginx-deployment-85ddf47c5d-ssvnx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ssvnx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-ssvnx,UID:11dfc77b-5318-11ea-a994-fa163e34d433,ResourceVersion:22205014,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257d57 0xc001257d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001257dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.401: INFO: Pod "nginx-deployment-85ddf47c5d-vgkpf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vgkpf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-vgkpf,UID:126971ba-5318-11ea-a994-fa163e34d433,ResourceVersion:22205029,Generation:0,CreationTimestamp:2020-02-19 13:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257e87 0xc001257e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001257ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001257f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.401: INFO: Pod "nginx-deployment-85ddf47c5d-wfmm8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wfmm8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-wfmm8,UID:11df8ac4-5318-11ea-a994-fa163e34d433,ResourceVersion:22205024,Generation:0,CreationTimestamp:2020-02-19 13:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc001257f87 0xc001257f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017f4140} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017f4160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.402: INFO: Pod "nginx-deployment-85ddf47c5d-wzbqw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wzbqw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-wzbqw,UID:1088bbfa-5318-11ea-a994-fa163e34d433,ResourceVersion:22204995,Generation:0,CreationTimestamp:2020-02-19 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017f42c7 0xc0017f42c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017f43c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017f43e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 19 13:02:25.402: INFO: Pod "nginx-deployment-85ddf47c5d-xzjgp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xzjgp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-88z4s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-88z4s/pods/nginx-deployment-85ddf47c5d-xzjgp,UID:10891877-5318-11ea-a994-fa163e34d433,ResourceVersion:22204992,Generation:0,CreationTimestamp:2020-02-19 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d eb39173e-5317-11ea-a994-fa163e34d433 0xc0017f4457 0xc0017f4458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hsr25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hsr25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hsr25 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017f4670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017f4690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-19 13:02:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:02:25.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-88z4s" for this suite.
Feb 19 13:03:56.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:03:59.460: INFO: namespace: e2e-tests-deployment-88z4s, resource: bindings, ignored listing per whitelist
Feb 19 13:03:59.546: INFO: namespace e2e-tests-deployment-88z4s deletion completed in 1m34.101671076s

• [SLOW TEST:161.329 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
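Note on the proportional-scaling test above: the suite scales a Deployment while a rollout is in flight and expects the extra replicas to be split across the old and new ReplicaSets in proportion to their current sizes, bounded by maxSurge/maxUnavailable. The Go sketch below (k8s.io/api types) shows roughly such a Deployment; the name, labels, replica count, and surge values are illustrative assumptions, not the exact e2e fixture.

// proportional_scaling_sketch.go
// Minimal sketch of a Deployment like the one exercised above: maxSurge /
// maxUnavailable bound how far each ReplicaSet may grow or shrink, and a
// scale-up issued mid-rollout is distributed proportionally between the old
// and new ReplicaSets.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)
	maxSurge := intstr.FromInt(3)       // at most 3 pods above the desired count
	maxUnavailable := intstr.FromInt(2) // at most 2 pods below the desired count
	labels := map[string]string{"name": "nginx"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}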
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:03:59.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 19 13:04:00.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 19 13:04:03.770: INFO: stderr: ""
Feb 19 13:04:03.771: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:04:03.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bxdk5" for this suite.
Feb 19 13:04:16.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:04:16.733: INFO: namespace: e2e-tests-kubectl-bxdk5, resource: bindings, ignored listing per whitelist
Feb 19 13:04:16.787: INFO: namespace e2e-tests-kubectl-bxdk5 deletion completed in 12.780977417s

• [SLOW TEST:17.241 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
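Note on the cluster-info test above: it shells out to kubectl and only asserts that the control-plane and KubeDNS entries appear in the (ANSI-colored) output. A rough stand-alone equivalent in Go is sketched below; the kubeconfig path and the "Kubernetes master" wording (which is specific to 1.13-era clusters) are assumptions carried over from this run.

// clusterinfo_check.go
// Run `kubectl cluster-info` and verify the expected service entries are
// listed, after stripping kubectl's ANSI color escapes.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl cluster-info failed: %v\n%s", err, out)
	}

	// kubectl colors its output; strip ANSI escapes before matching.
	plain := regexp.MustCompile(`\x1b\[[0-9;]*m`).ReplaceAllString(string(out), "")

	for _, want := range []string{"Kubernetes master", "KubeDNS"} {
		if !strings.Contains(plain, want) {
			log.Fatalf("cluster-info output missing %q:\n%s", want, plain)
		}
	}
	fmt.Println("cluster-info lists the expected services")
}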
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:04:16.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 19 13:04:18.849: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:04:47.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-nqkx4" for this suite.
Feb 19 13:05:12.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:05:12.190: INFO: namespace: e2e-tests-init-container-nqkx4, resource: bindings, ignored listing per whitelist
Feb 19 13:05:12.413: INFO: namespace e2e-tests-init-container-nqkx4 deletion completed in 24.402963123s

• [SLOW TEST:55.625 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
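Note on the init-container test above: a RestartAlways pod only starts its regular container after every init container has run to completion, in order. The sketch below shows that shape; the busybox image and commands are illustrative, not the exact e2e fixture.

// init_containers_sketch.go
// Both init containers must exit successfully, one after the other, before
// the main container is started.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}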
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:05:12.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qcsgk
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 19 13:05:12.682: INFO: Found 0 stateful pods, waiting for 3
Feb 19 13:05:22.786: INFO: Found 2 stateful pods, waiting for 3
Feb 19 13:05:32.996: INFO: Found 2 stateful pods, waiting for 3
Feb 19 13:05:42.901: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:05:42.902: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:05:42.902: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 19 13:05:52.697: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:05:52.698: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:05:52.698: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 19 13:05:52.738: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 19 13:06:02.944: INFO: Updating stateful set ss2
Feb 19 13:06:02.959: INFO: Waiting for Pod e2e-tests-statefulset-qcsgk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 19 13:06:13.335: INFO: Found 1 stateful pods, waiting for 3
Feb 19 13:06:23.547: INFO: Found 2 stateful pods, waiting for 3
Feb 19 13:06:33.476: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:06:33.476: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:06:33.476: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 19 13:06:43.352: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:06:43.352: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 19 13:06:43.352: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 19 13:06:43.485: INFO: Updating stateful set ss2
Feb 19 13:06:43.522: INFO: Waiting for Pod e2e-tests-statefulset-qcsgk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 19 13:06:53.907: INFO: Updating stateful set ss2
Feb 19 13:06:54.250: INFO: Waiting for StatefulSet e2e-tests-statefulset-qcsgk/ss2 to complete update
Feb 19 13:06:54.250: INFO: Waiting for Pod e2e-tests-statefulset-qcsgk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 19 13:07:04.581: INFO: Waiting for StatefulSet e2e-tests-statefulset-qcsgk/ss2 to complete update
Feb 19 13:07:04.581: INFO: Waiting for Pod e2e-tests-statefulset-qcsgk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 19 13:07:14.435: INFO: Waiting for StatefulSet e2e-tests-statefulset-qcsgk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 19 13:07:24.281: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qcsgk
Feb 19 13:07:24.285: INFO: Scaling statefulset ss2 to 0
Feb 19 13:07:54.346: INFO: Waiting for statefulset status.replicas updated to 0
Feb 19 13:07:54.351: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:07:54.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qcsgk" for this suite.
Feb 19 13:08:04.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:08:05.127: INFO: namespace: e2e-tests-statefulset-qcsgk, resource: bindings, ignored listing per whitelist
Feb 19 13:08:05.330: INFO: namespace e2e-tests-statefulset-qcsgk deletion completed in 10.811087109s

• [SLOW TEST:172.917 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
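Note on the canary / phased rolling update above: the test bumps the pod template image while spec.updateStrategy.rollingUpdate.partition is set, so only pods with an ordinal greater than or equal to the partition (ss2-2 in this run) move to the new revision; lowering the partition step by step then phases the update across the remaining pods. A sketch of that mechanism, with illustrative names and labels, is below.

// statefulset_canary_sketch.go
// With partition=2 only ordinal 2 is updated to the new template; dropping
// the partition to 1 and then 0 rolls the change out in phases.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	partition := int32(2) // only pods with ordinal >= 2 get the new template
	labels := map[string]string{"app": "ss2"}

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine", // the canary image
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}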
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:08:05.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 19 13:08:18.395: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ddf9fdc9-5318-11ea-a0a3-0242ac110008"
Feb 19 13:08:18.395: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ddf9fdc9-5318-11ea-a0a3-0242ac110008" in namespace "e2e-tests-pods-pptvf" to be "terminated due to deadline exceeded"
Feb 19 13:08:18.420: INFO: Pod "pod-update-activedeadlineseconds-ddf9fdc9-5318-11ea-a0a3-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 24.426644ms
Feb 19 13:08:21.196: INFO: Pod "pod-update-activedeadlineseconds-ddf9fdc9-5318-11ea-a0a3-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.800231765s
Feb 19 13:08:21.196: INFO: Pod "pod-update-activedeadlineseconds-ddf9fdc9-5318-11ea-a0a3-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:08:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pptvf" for this suite.
Feb 19 13:08:29.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:08:29.999: INFO: namespace: e2e-tests-pods-pptvf, resource: bindings, ignored listing per whitelist
Feb 19 13:08:30.016: INFO: namespace e2e-tests-pods-pptvf deletion completed in 8.80719587s

• [SLOW TEST:24.686 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
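Note on the activeDeadlineSeconds test above: it updates spec.activeDeadlineSeconds on an already-running pod and then waits for the pod to fail with reason DeadlineExceeded. One minimal way to express that update is a merge patch, sketched below; the deadline value, pod name, and namespace are placeholders, and the patch could equally be sent with client-go's Pods(ns).Patch.

// active_deadline_patch_sketch.go
// Build the merge-patch body that shortens a running pod's
// activeDeadlineSeconds; the kubelet then kills the pod and its phase becomes
// Failed with reason DeadlineExceeded.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"activeDeadlineSeconds": 5, // shorten the deadline to 5 seconds
		},
	}

	body, _ := json.Marshal(patch)
	fmt.Printf("kubectl patch pod pod-update-activedeadlineseconds -n <ns> --type=merge -p '%s'\n", body)
}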
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:08:30.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 19 13:08:30.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c8hpg'
Feb 19 13:08:30.554: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 19 13:08:30.554: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 19 13:08:32.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-c8hpg'
Feb 19 13:08:33.312: INFO: stderr: ""
Feb 19 13:08:33.312: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:08:33.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c8hpg" for this suite.
Feb 19 13:08:40.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:08:40.227: INFO: namespace: e2e-tests-kubectl-c8hpg, resource: bindings, ignored listing per whitelist
Feb 19 13:08:40.297: INFO: namespace e2e-tests-kubectl-c8hpg deletion completed in 6.95814493s

• [SLOW TEST:10.281 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
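Note on the kubectl-run test above: the stderr shows that the deployment/apps.v1 generator is deprecated. What it used to create is roughly the apps/v1 Deployment sketched below, a single container plus a run:<name> label; the exact labels are an assumption about the generator's defaults, and today the same result comes from `kubectl create deployment e2e-test-nginx-deployment --image=...`.

// run_default_equivalent.go
// Approximate shape of the Deployment produced by the deprecated
// `kubectl run --generator=deployment/apps.v1`.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}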
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:08:40.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2dgnq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 19 13:08:40.542: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 19 13:09:25.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2dgnq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 13:09:25.125: INFO: >>> kubeConfig: /root/.kube/config
I0219 13:09:25.186509       8 log.go:172] (0xc0000ebc30) (0xc001c563c0) Create stream
I0219 13:09:25.186766       8 log.go:172] (0xc0000ebc30) (0xc001c563c0) Stream added, broadcasting: 1
I0219 13:09:25.204479       8 log.go:172] (0xc0000ebc30) Reply frame received for 1
I0219 13:09:25.204823       8 log.go:172] (0xc0000ebc30) (0xc0008c2460) Create stream
I0219 13:09:25.204885       8 log.go:172] (0xc0000ebc30) (0xc0008c2460) Stream added, broadcasting: 3
I0219 13:09:25.207518       8 log.go:172] (0xc0000ebc30) Reply frame received for 3
I0219 13:09:25.207582       8 log.go:172] (0xc0000ebc30) (0xc0020aa140) Create stream
I0219 13:09:25.207604       8 log.go:172] (0xc0000ebc30) (0xc0020aa140) Stream added, broadcasting: 5
I0219 13:09:25.210600       8 log.go:172] (0xc0000ebc30) Reply frame received for 5
I0219 13:09:25.535739       8 log.go:172] (0xc0000ebc30) Data frame received for 3
I0219 13:09:25.535872       8 log.go:172] (0xc0008c2460) (3) Data frame handling
I0219 13:09:25.535904       8 log.go:172] (0xc0008c2460) (3) Data frame sent
I0219 13:09:25.704853       8 log.go:172] (0xc0000ebc30) (0xc0008c2460) Stream removed, broadcasting: 3
I0219 13:09:25.705090       8 log.go:172] (0xc0000ebc30) Data frame received for 1
I0219 13:09:25.705131       8 log.go:172] (0xc001c563c0) (1) Data frame handling
I0219 13:09:25.705203       8 log.go:172] (0xc001c563c0) (1) Data frame sent
I0219 13:09:25.705235       8 log.go:172] (0xc0000ebc30) (0xc001c563c0) Stream removed, broadcasting: 1
I0219 13:09:25.705765       8 log.go:172] (0xc0000ebc30) (0xc0020aa140) Stream removed, broadcasting: 5
I0219 13:09:25.705844       8 log.go:172] (0xc0000ebc30) (0xc001c563c0) Stream removed, broadcasting: 1
I0219 13:09:25.705877       8 log.go:172] (0xc0000ebc30) (0xc0008c2460) Stream removed, broadcasting: 3
I0219 13:09:25.705889       8 log.go:172] (0xc0000ebc30) (0xc0020aa140) Stream removed, broadcasting: 5
I0219 13:09:25.706241       8 log.go:172] (0xc0000ebc30) Go away received
Feb 19 13:09:25.706: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:09:25.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2dgnq" for this suite.
Feb 19 13:09:49.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:09:49.954: INFO: namespace: e2e-tests-pod-network-test-2dgnq, resource: bindings, ignored listing per whitelist
Feb 19 13:09:50.030: INFO: namespace e2e-tests-pod-network-test-2dgnq deletion completed in 24.287252332s

• [SLOW TEST:69.732 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
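Note on the intra-pod networking check above: the curl executed inside host-test-container-pod asks the netexec webserver on one pod IP to dial a second pod and report the hostname it reached; an empty endpoints map means every expected hostname was seen. A rough Go equivalent of that probe is sketched below; the IPs and ports are the ones from this particular run and will differ on any other cluster. The UDP variant that follows is the same probe with protocol=udp and port 8081.

// dial_probe_sketch.go
// Ask the netexec server at 10.32.0.5:8080 to dial 10.32.0.4:8080 over HTTP
// and print the hostname it reports back.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", "10.32.0.4")
	q.Set("port", "8080")
	q.Set("tries", "1")

	probe := "http://10.32.0.5:8080/dial?" + q.Encode()
	resp, err := http.Get(probe)
	if err != nil {
		log.Fatalf("dial probe failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// Expected shape: {"responses":["<hostname of the target pod>"]}
	fmt.Printf("%s -> %s\n", probe, body)
}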
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:09:50.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2r4h2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 19 13:09:50.277: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 19 13:10:24.628: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-2r4h2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 19 13:10:24.628: INFO: >>> kubeConfig: /root/.kube/config
I0219 13:10:24.700567       8 log.go:172] (0xc00279c2c0) (0xc001793ea0) Create stream
I0219 13:10:24.700836       8 log.go:172] (0xc00279c2c0) (0xc001793ea0) Stream added, broadcasting: 1
I0219 13:10:24.717880       8 log.go:172] (0xc00279c2c0) Reply frame received for 1
I0219 13:10:24.717940       8 log.go:172] (0xc00279c2c0) (0xc0010286e0) Create stream
I0219 13:10:24.717951       8 log.go:172] (0xc00279c2c0) (0xc0010286e0) Stream added, broadcasting: 3
I0219 13:10:24.719990       8 log.go:172] (0xc00279c2c0) Reply frame received for 3
I0219 13:10:24.720026       8 log.go:172] (0xc00279c2c0) (0xc001450460) Create stream
I0219 13:10:24.720035       8 log.go:172] (0xc00279c2c0) (0xc001450460) Stream added, broadcasting: 5
I0219 13:10:24.722797       8 log.go:172] (0xc00279c2c0) Reply frame received for 5
I0219 13:10:25.206012       8 log.go:172] (0xc00279c2c0) Data frame received for 3
I0219 13:10:25.206167       8 log.go:172] (0xc0010286e0) (3) Data frame handling
I0219 13:10:25.206246       8 log.go:172] (0xc0010286e0) (3) Data frame sent
I0219 13:10:25.363664       8 log.go:172] (0xc00279c2c0) (0xc0010286e0) Stream removed, broadcasting: 3
I0219 13:10:25.364147       8 log.go:172] (0xc00279c2c0) Data frame received for 1
I0219 13:10:25.364277       8 log.go:172] (0xc00279c2c0) (0xc001450460) Stream removed, broadcasting: 5
I0219 13:10:25.364344       8 log.go:172] (0xc001793ea0) (1) Data frame handling
I0219 13:10:25.364400       8 log.go:172] (0xc001793ea0) (1) Data frame sent
I0219 13:10:25.364430       8 log.go:172] (0xc00279c2c0) (0xc001793ea0) Stream removed, broadcasting: 1
I0219 13:10:25.364456       8 log.go:172] (0xc00279c2c0) Go away received
I0219 13:10:25.364783       8 log.go:172] (0xc00279c2c0) (0xc001793ea0) Stream removed, broadcasting: 1
I0219 13:10:25.364805       8 log.go:172] (0xc00279c2c0) (0xc0010286e0) Stream removed, broadcasting: 3
I0219 13:10:25.364816       8 log.go:172] (0xc00279c2c0) (0xc001450460) Stream removed, broadcasting: 5
Feb 19 13:10:25.365: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:10:25.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2r4h2" for this suite.
Feb 19 13:10:51.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:10:51.609: INFO: namespace: e2e-tests-pod-network-test-2r4h2, resource: bindings, ignored listing per whitelist
Feb 19 13:10:51.621: INFO: namespace e2e-tests-pod-network-test-2r4h2 deletion completed in 26.230468283s

• [SLOW TEST:61.590 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:10:51.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 19 13:10:51.889: INFO: Waiting up to 5m0s for pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-containers-k88mc" to be "success or failure"
Feb 19 13:10:51.902: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.804233ms
Feb 19 13:10:53.919: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030073262s
Feb 19 13:10:56.216: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326984906s
Feb 19 13:10:59.836: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.947439396s
Feb 19 13:11:01.859: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.970128617s
Feb 19 13:11:03.886: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.996932929s
Feb 19 13:11:05.905: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.015931243s
STEP: Saw pod success
Feb 19 13:11:05.905: INFO: Pod "client-containers-4108d307-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:11:05.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4108d307-5319-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 13:11:05.995: INFO: Waiting for pod client-containers-4108d307-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:11:06.004: INFO: Pod client-containers-4108d307-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:11:06.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-k88mc" for this suite.
Feb 19 13:11:12.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:11:12.132: INFO: namespace: e2e-tests-containers-k88mc, resource: bindings, ignored listing per whitelist
Feb 19 13:11:12.260: INFO: namespace e2e-tests-containers-k88mc deletion completed in 6.247953809s

• [SLOW TEST:20.638 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
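Note on the image-defaults test above: the pod's container sets neither command nor args, so the image's own ENTRYPOINT/CMD run unchanged. A minimal sketch of that spec, with an illustrative image, is below.

// image_defaults_sketch.go
// A container with no Command and no Args: the image defaults apply.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command and Args deliberately left empty.
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}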
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:11:12.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 19 13:11:27.440: INFO: Successfully updated pod "annotationupdate4d6983d4-5319-11ea-a0a3-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:11:29.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4phwz" for this suite.
Feb 19 13:11:53.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:11:53.796: INFO: namespace: e2e-tests-projected-4phwz, resource: bindings, ignored listing per whitelist
Feb 19 13:11:53.837: INFO: namespace e2e-tests-projected-4phwz deletion completed in 24.281412238s

• [SLOW TEST:41.577 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
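Note on the annotation-update test above: the pod projects its own metadata.annotations into a file through a projected downward-API volume, the test then patches the annotations, and the kubelet is expected to rewrite the file. The volume wiring is sketched below; the mount path, annotation value, and command are illustrative.

// projected_annotations_sketch.go
// Project metadata.annotations into /etc/podinfo/annotations; updating the
// pod's annotations causes the kubelet to refresh that file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}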
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:11:53.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5wnrs/secret-test-6640100a-5319-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 19 13:11:54.361: INFO: Waiting up to 5m0s for pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-secrets-5wnrs" to be "success or failure"
Feb 19 13:11:54.520: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 159.181123ms
Feb 19 13:11:56.911: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549684215s
Feb 19 13:11:59.310: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.948927765s
Feb 19 13:12:01.426: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065089369s
Feb 19 13:12:04.477: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116253106s
Feb 19 13:12:08.430: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.069302194s
Feb 19 13:12:11.342: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.980562237s
Feb 19 13:12:13.411: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.050433876s
Feb 19 13:12:15.645: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.283763846s
Feb 19 13:12:17.700: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.339389557s
STEP: Saw pod success
Feb 19 13:12:17.701: INFO: Pod "pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:12:17.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008 container env-test: 
STEP: delete the pod
Feb 19 13:12:18.504: INFO: Waiting for pod pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:12:18.547: INFO: Pod pod-configmaps-6641e173-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:12:18.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5wnrs" for this suite.
Feb 19 13:12:26.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:12:26.063: INFO: namespace: e2e-tests-secrets-5wnrs, resource: bindings, ignored listing per whitelist
Feb 19 13:12:26.202: INFO: namespace e2e-tests-secrets-5wnrs deletion completed in 7.640048223s

• [SLOW TEST:32.364 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
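Note on the secrets-via-environment test above: a key from a Secret is surfaced as an environment variable in the container, and the test checks the container sees the expected value. A sketch follows; the secret name, key, value, and variable name are placeholders.

// secret_env_sketch.go
// Create a Secret and a pod whose env-test container reads one key from it
// through valueFrom.secretKeyRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}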
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:12:26.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 19 13:12:26.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-6nlbt" to be "success or failure"
Feb 19 13:12:26.641: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.14941ms
Feb 19 13:12:28.815: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208212222s
Feb 19 13:12:30.829: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222120668s
Feb 19 13:12:32.837: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230745403s
Feb 19 13:12:34.946: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339056537s
Feb 19 13:12:36.958: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.351723184s
Feb 19 13:12:39.820: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.213542925s
Feb 19 13:12:41.841: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.234101871s
STEP: Saw pod success
Feb 19 13:12:41.841: INFO: Pod "downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:12:41.870: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008 container client-container: 
STEP: delete the pod
Feb 19 13:12:42.833: INFO: Waiting for pod downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:12:42.843: INFO: Pod downwardapi-volume-797d890b-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:12:42.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6nlbt" for this suite.
Feb 19 13:12:49.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:12:49.178: INFO: namespace: e2e-tests-downward-api-6nlbt, resource: bindings, ignored listing per whitelist
Feb 19 13:12:49.204: INFO: namespace e2e-tests-downward-api-6nlbt deletion completed in 6.232309449s

• [SLOW TEST:23.002 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
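Note on the memory-request test above: the container declares a memory request and a downward-API volume file exposes requests.memory back to the container, so the test can compare the file contents against the declared request. The sketch below shows that wiring; the request size, paths, and command are illustrative.

// memory_request_downward_sketch.go
// Expose the container's own requests.memory (in bytes, divisor "1") through
// a downward API volume file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}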
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:12:49.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8714bf3f-5319-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 13:12:49.535: INFO: Waiting up to 5m0s for pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-qctdv" to be "success or failure"
Feb 19 13:12:49.583: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 47.768544ms
Feb 19 13:12:51.871: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335071528s
Feb 19 13:12:53.888: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352521601s
Feb 19 13:12:55.907: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371269875s
Feb 19 13:12:58.205: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669840496s
Feb 19 13:13:00.457: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.921158676s
Feb 19 13:13:03.945: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.409236712s
STEP: Saw pod success
Feb 19 13:13:03.945: INFO: Pod "pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:13:04.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 19 13:13:04.477: INFO: Waiting for pod pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:13:04.491: INFO: Pod pod-configmaps-8719c191-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:13:04.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qctdv" for this suite.
Feb 19 13:13:10.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:13:10.934: INFO: namespace: e2e-tests-configmap-qctdv, resource: bindings, ignored listing per whitelist
Feb 19 13:13:10.934: INFO: namespace e2e-tests-configmap-qctdv deletion completed in 6.428438311s

• [SLOW TEST:21.730 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
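Editor's note: the STEPs above (create ConfigMap, create a pod that consumes it in two volumes, wait for "success or failure", delete the pod) follow a pattern that can be reproduced with client-go. The sketch below is illustrative only: the namespace, object names, image, and mount paths are made up, and the calls assume a reasonably recent client-go where Create/Get/Delete take a context.

// Rough sketch of the flow this spec follows, under the assumptions stated above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // the suite uses a generated e2e-tests-configmap-* namespace

	// STEP: Creating configMap
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// STEP: Creating a pod that mounts the same ConfigMap in two volumes
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "vol-1", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name}}}},
				{Name: "vol-2", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name}}}},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/cm-1"},
					{Name: "vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Wait for "success or failure", roughly like the framework's 5m0s wait.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}

	// STEP: delete the pod
	fmt.Println("pod finished; deleting it")
	_ = cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{})
}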
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:13:10.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 19 13:13:11.215: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.472587ms)
Feb 19 13:13:11.248: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 32.653798ms)
Feb 19 13:13:11.404: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 155.796065ms)
Feb 19 13:13:11.413: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.715621ms)
Feb 19 13:13:11.418: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.288304ms)
Feb 19 13:13:11.423: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.729482ms)
Feb 19 13:13:11.429: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.451715ms)
Feb 19 13:13:11.434: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.23918ms)
Feb 19 13:13:11.438: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.380402ms)
Feb 19 13:13:11.444: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.827872ms)
Feb 19 13:13:11.449: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.46044ms)
Feb 19 13:13:11.453: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.119436ms)
Feb 19 13:13:11.457: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.073583ms)
Feb 19 13:13:11.461: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.360111ms)
Feb 19 13:13:11.465: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.04832ms)
Feb 19 13:13:11.469: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.92404ms)
Feb 19 13:13:11.473: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.635433ms)
Feb 19 13:13:11.477: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.910949ms)
Feb 19 13:13:11.481: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.10004ms)
Feb 19 13:13:11.485: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.309966ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:13:11.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-8k4fx" for this suite.
Feb 19 13:13:17.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:13:17.649: INFO: namespace: e2e-tests-proxy-8k4fx, resource: bindings, ignored listing per whitelist
Feb 19 13:13:17.735: INFO: namespace e2e-tests-proxy-8k4fx deletion completed in 6.246202724s

• [SLOW TEST:6.800 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
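Editor's note: each INFO line above is one GET through the API server's node proxy subresource, with the kubelet port (10250) spelled out in the node resource name. A minimal sketch of the same request via client-go's REST client follows; it reuses the node name and kubeconfig path from this log, which would differ on another cluster, and assumes a recent client-go where DoRaw takes a context.

// Sketch: fetch the kubelet log index through /api/v1/nodes/<node>:10250/proxy/logs/.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
	body, err := cs.CoreV1().RESTClient().Get().AbsPath(path).DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // directory listing of the node's log directory
}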
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:13:17.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 19 13:13:18.198: INFO: Waiting up to 5m0s for pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-downward-api-88mxf" to be "success or failure"
Feb 19 13:13:18.216: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.076328ms
Feb 19 13:13:20.239: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041057744s
Feb 19 13:13:22.323: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125122735s
Feb 19 13:13:24.337: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13932271s
Feb 19 13:13:27.521: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.322835797s
Feb 19 13:13:29.537: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.339592031s
Feb 19 13:13:31.553: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.355498406s
STEP: Saw pod success
Feb 19 13:13:31.553: INFO: Pod "downward-api-982e35ae-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:13:31.557: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-982e35ae-5319-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 13:13:31.718: INFO: Waiting for pod downward-api-982e35ae-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:13:31.743: INFO: Pod downward-api-982e35ae-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:13:31.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-88mxf" for this suite.
Feb 19 13:13:37.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:13:37.902: INFO: namespace: e2e-tests-downward-api-88mxf, resource: bindings, ignored listing per whitelist
Feb 19 13:13:37.971: INFO: namespace e2e-tests-downward-api-88mxf deletion completed in 6.207077966s

• [SLOW TEST:20.235 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
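Editor's note: the spec above injects the container's own limits and requests as environment variables through resourceFieldRef. A hypothetical pod of that shape is sketched below; the variable names, image, command, and quantities are illustrative, not taken from the suite.

// Sketch of downward API env vars backed by the container's resources.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	resourceEnv := func(name, res string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: res},
			},
		}
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					resourceEnv("CPU_LIMIT", "limits.cpu"),
					resourceEnv("MEMORY_LIMIT", "limits.memory"),
					resourceEnv("CPU_REQUEST", "requests.cpu"),
					resourceEnv("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}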
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:13:37.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 19 13:13:48.316: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-a4297860-5319-11ea-a0a3-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-nkkl5", SelfLink:"/api/v1/namespaces/e2e-tests-pods-nkkl5/pods/pod-submit-remove-a4297860-5319-11ea-a0a3-0242ac110008", UID:"a42b286f-5319-11ea-a994-fa163e34d433", ResourceVersion:"22206614", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717714818, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"169059566"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j6gq2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ceee80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j6gq2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002224768), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001333d40), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022247a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022247c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022247c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022247cc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717714818, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717714828, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717714828, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717714818, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0021e7820), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0021e7840), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://5b14671111ce95eaec033e23ac2a3f7bfb5e7d5ef768797145c0f452f43b0c5b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:14:02.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nkkl5" for this suite.
Feb 19 13:14:08.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:14:08.927: INFO: namespace: e2e-tests-pods-nkkl5, resource: bindings, ignored listing per whitelist
Feb 19 13:14:08.970: INFO: namespace e2e-tests-pods-nkkl5 deletion completed in 6.194359475s

• [SLOW TEST:30.999 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
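Editor's note: the STEPs above submit a labelled pod, confirm it is observed, then delete it gracefully and wait for it to disappear. A simplified sketch of that cycle follows (the suite additionally drives a watch); names, namespace, and grace period are illustrative, and the calls assume a recent client-go.

// Sketch: submit a pod, verify it exists, delete it gracefully, wait until gone.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-submit-remove-example",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("submitted; verifying the pod is in kubernetes")
	if _, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{}); err != nil {
		panic(err)
	}

	// Graceful delete, then poll until the pod object is gone.
	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod no longer exists")
}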
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:14:08.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b6ab2ba2-5319-11ea-a0a3-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-b6ab2c5d-5319-11ea-a0a3-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b6ab2ba2-5319-11ea-a0a3-0242ac110008
STEP: Updating configmap cm-test-opt-upd-b6ab2c5d-5319-11ea-a0a3-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-b6ab2ca8-5319-11ea-a0a3-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:14:31.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h6xfm" for this suite.
Feb 19 13:14:55.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:14:55.644: INFO: namespace: e2e-tests-configmap-h6xfm, resource: bindings, ignored listing per whitelist
Feb 19 13:14:55.929: INFO: namespace e2e-tests-configmap-h6xfm deletion completed in 24.345553239s

• [SLOW TEST:46.958 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
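Editor's note: the "optional updates" spec above mounts ConfigMaps marked optional, so the pod runs even while one of them is deleted or not yet created, and the kubelet keeps syncing the volume contents as they change. A hypothetical pod of that shape is sketched below; the volume names, mount paths, image, and command are illustrative.

// Sketch of a pod with optional ConfigMap volumes.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	cmVolume := func(volName, cmName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					Optional:             &optional, // pod starts even if the ConfigMap is absent
				},
			},
		}
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				cmVolume("delcm-volume", "cm-test-opt-del"),
				cmVolume("updcm-volume", "cm-test-opt-upd"),
				cmVolume("createcm-volume", "cm-test-opt-create"),
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/cm-*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "delcm-volume", MountPath: "/etc/cm-del"},
					{Name: "updcm-volume", MountPath: "/etc/cm-upd"},
					{Name: "createcm-volume", MountPath: "/etc/cm-create"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}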
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:14:55.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 19 13:14:56.250: INFO: Waiting up to 5m0s for pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-var-expansion-wjqwd" to be "success or failure"
Feb 19 13:14:56.411: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 160.818907ms
Feb 19 13:14:58.620: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369444067s
Feb 19 13:15:00.653: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402977723s
Feb 19 13:15:03.530: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279742196s
Feb 19 13:15:05.545: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.294432844s
Feb 19 13:15:07.558: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.307655546s
Feb 19 13:15:09.594: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.344068755s
STEP: Saw pod success
Feb 19 13:15:09.595: INFO: Pod "var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:15:09.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 19 13:15:09.883: INFO: Waiting for pod var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:15:10.054: INFO: Pod var-expansion-d2ad5c5b-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:15:10.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wjqwd" for this suite.
Feb 19 13:15:18.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:15:18.246: INFO: namespace: e2e-tests-var-expansion-wjqwd, resource: bindings, ignored listing per whitelist
Feb 19 13:15:18.293: INFO: namespace e2e-tests-var-expansion-wjqwd deletion completed in 8.221991758s

• [SLOW TEST:22.363 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
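Editor's note: the Variable Expansion spec above checks that an env var can be composed from other env vars with $(VAR) references, which the kubelet expands before starting the container. A minimal hypothetical pod of that shape follows; names and values are illustrative.

// Sketch of env composition via $(VAR) expansion.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $COMPOSED_VAR"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// The kubelet expands $(FOO) and $(BAR) before the container starts.
					{Name: "COMPOSED_VAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}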
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:15:18.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 19 13:15:18.701: INFO: Waiting up to 5m0s for pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-emptydir-gh4pk" to be "success or failure"
Feb 19 13:15:18.737: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 35.979147ms
Feb 19 13:15:21.194: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492784885s
Feb 19 13:15:23.204: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503098866s
Feb 19 13:15:25.339: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63779752s
Feb 19 13:15:27.986: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.285525032s
Feb 19 13:15:30.378: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.676858132s
Feb 19 13:15:32.422: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.72077709s
STEP: Saw pod success
Feb 19 13:15:32.422: INFO: Pod "pod-e00ed5a5-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:15:32.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e00ed5a5-5319-11ea-a0a3-0242ac110008 container test-container: 
STEP: delete the pod
Feb 19 13:15:33.526: INFO: Waiting for pod pod-e00ed5a5-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:15:33.992: INFO: Pod pod-e00ed5a5-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:15:33.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gh4pk" for this suite.
Feb 19 13:15:40.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:15:40.172: INFO: namespace: e2e-tests-emptydir-gh4pk, resource: bindings, ignored listing per whitelist
Feb 19 13:15:40.401: INFO: namespace e2e-tests-emptydir-gh4pk deletion completed in 6.39389693s

• [SLOW TEST:22.108 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
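Editor's note: the "(root,0644,tmpfs)" case above uses a memory-backed emptyDir and checks the mode of a 0644 file created in it. The sketch below stands in for the suite's mount-test image with a plain shell command; the pod name, image, and paths are illustrative.

// Sketch of a tmpfs-backed emptyDir with a 0644 file written into it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}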
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:15:40.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ed7542a6-5319-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 13:15:41.225: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008" in namespace "e2e-tests-projected-47zj7" to be "success or failure"
Feb 19 13:15:41.230: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.954148ms
Feb 19 13:15:43.388: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162669625s
Feb 19 13:15:45.411: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185765872s
Feb 19 13:15:48.431: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.205756815s
Feb 19 13:15:50.445: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.219592915s
Feb 19 13:15:52.462: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.236897992s
STEP: Saw pod success
Feb 19 13:15:52.463: INFO: Pod "pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:15:52.475: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 19 13:15:52.727: INFO: Waiting for pod pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:15:52.755: INFO: Pod pod-projected-configmaps-ed80d17f-5319-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:15:52.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-47zj7" for this suite.
Feb 19 13:15:59.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:15:59.225: INFO: namespace: e2e-tests-projected-47zj7, resource: bindings, ignored listing per whitelist
Feb 19 13:15:59.398: INFO: namespace e2e-tests-projected-47zj7 deletion completed in 6.623373596s

• [SLOW TEST:18.996 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
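Editor's note: the Projected configMap spec above is the same multi-volume consumption pattern, except the ConfigMap is surfaced through a projected volume source. The short sketch below shows only that volume shape; the names are illustrative, and the volume would be mounted in a PodSpec exactly like a plain configMap volume.

// Sketch of a ConfigMap exposed through a projected volume source.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out)) // drop this volume into a PodSpec and mount it one or more times
}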
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:15:59.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 19 13:16:32.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:32.287: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:34.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:34.335: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:36.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:36.558: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:38.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:39.625: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:40.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:40.305: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:42.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:42.297: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:44.289: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:44.313: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:46.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:46.314: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:48.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:48.319: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:50.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:50.320: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:52.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:52.305: INFO: Pod pod-with-poststart-http-hook still exists
Feb 19 13:16:54.288: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 19 13:16:54.296: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:16:54.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-s5rv5" for this suite.
Feb 19 13:17:18.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:17:18.456: INFO: namespace: e2e-tests-container-lifecycle-hook-s5rv5, resource: bindings, ignored listing per whitelist
Feb 19 13:17:18.544: INFO: namespace e2e-tests-container-lifecycle-hook-s5rv5 deletion completed in 24.243579507s

• [SLOW TEST:79.146 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
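Editor's note: the lifecycle-hook spec above first creates a handler pod, then a pod whose container declares a postStart HTTPGet hook pointing at that handler, and finally deletes the hooked pod and polls until it disappears (the repeated "still exists" lines). The sketch below shows the hooked pod's shape; the host IP, port, and path are illustrative, and with the v1.13-era API used in this run the handler type is corev1.Handler (renamed LifecycleHandler in newer releases).

// Sketch of a container with a postStart HTTPGet lifecycle hook.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "nginx:1.14-alpine",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // hypothetical handler endpoint
							Host: "10.32.0.4",           // hypothetical IP of the handler pod
							Port: intstr.FromInt(8080),  // hypothetical handler port
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}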
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:17:18.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-27c24770-531a-11ea-a0a3-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-27c24770-531a-11ea-a0a3-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:17:33.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k88fs" for this suite.
Feb 19 13:17:59.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:17:59.572: INFO: namespace: e2e-tests-projected-k88fs, resource: bindings, ignored listing per whitelist
Feb 19 13:17:59.619: INFO: namespace e2e-tests-projected-k88fs deletion completed in 26.377912462s

• [SLOW TEST:41.074 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:17:59.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4017c4a1-531a-11ea-a0a3-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 19 13:17:59.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008" in namespace "e2e-tests-configmap-77l4x" to be "success or failure"
Feb 19 13:17:59.921: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.813713ms
Feb 19 13:18:02.222: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323066587s
Feb 19 13:18:04.236: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33684881s
Feb 19 13:18:06.260: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360633969s
Feb 19 13:18:08.274: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.375550072s
Feb 19 13:18:11.957: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058368136s
Feb 19 13:18:15.775: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.875844377s
Feb 19 13:18:17.938: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038659338s
Feb 19 13:18:20.209: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.309732368s
STEP: Saw pod success
Feb 19 13:18:20.209: INFO: Pod "pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008" satisfied condition "success or failure"
Feb 19 13:18:20.216: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 19 13:18:20.649: INFO: Waiting for pod pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008 to disappear
Feb 19 13:18:20.665: INFO: Pod pod-configmaps-40244cc2-531a-11ea-a0a3-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:18:20.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-77l4x" for this suite.
Feb 19 13:18:26.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:18:26.966: INFO: namespace: e2e-tests-configmap-77l4x, resource: bindings, ignored listing per whitelist
Feb 19 13:18:26.984: INFO: namespace e2e-tests-configmap-77l4x deletion completed in 6.298573692s

• [SLOW TEST:27.365 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
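Editor's note: the defaultMode spec above sets a file mode on the whole ConfigMap volume, so every projected key gets that mode unless an item overrides it. A short sketch of the volume shape follows; the names and the 0400 mode are illustrative choices, not read from the suite.

// Sketch of a configMap volume with defaultMode set.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // octal literal; serialized as decimal 256
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          &mode,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}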
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 19 13:18:26.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 19 13:18:28.029: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-flmp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-flmp7/configmaps/e2e-watch-test-watch-closed,UID:50da1faf-531a-11ea-a994-fa163e34d433,ResourceVersion:22207156,Generation:0,CreationTimestamp:2020-02-19 13:18:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 19 13:18:28.029: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-flmp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-flmp7/configmaps/e2e-watch-test-watch-closed,UID:50da1faf-531a-11ea-a994-fa163e34d433,ResourceVersion:22207157,Generation:0,CreationTimestamp:2020-02-19 13:18:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 19 13:18:28.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-flmp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-flmp7/configmaps/e2e-watch-test-watch-closed,UID:50da1faf-531a-11ea-a994-fa163e34d433,ResourceVersion:22207159,Generation:0,CreationTimestamp:2020-02-19 13:18:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 19 13:18:28.094: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-flmp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-flmp7/configmaps/e2e-watch-test-watch-closed,UID:50da1faf-531a-11ea-a994-fa163e34d433,ResourceVersion:22207160,Generation:0,CreationTimestamp:2020-02-19 13:18:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 19 13:18:28.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-flmp7" for this suite.
Feb 19 13:18:34.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 19 13:18:34.427: INFO: namespace: e2e-tests-watch-flmp7, resource: bindings, ignored listing per whitelist
Feb 19 13:18:34.455: INFO: namespace e2e-tests-watch-flmp7 deletion completed in 6.279348476s

• [SLOW TEST:7.470 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
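Editor's note: the Watchers spec above closes a watch after two notifications, mutates the ConfigMap while no watch is open, then opens a new watch from the last observed resourceVersion so the intermediate MODIFIED and the DELETED events are replayed. A sketch of that pattern with client-go follows; the namespace and label selector are illustrative, and the calls assume a recent client-go where Watch takes a context.

// Sketch: close a watch and restart it from the last observed resourceVersion.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"
	selector := "watch-this-configmap=watch-closed-and-restarted"

	// First watch: consume two notifications, remember the resourceVersion.
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	var lastRV string
	for i := 0; i < 2; i++ {
		ev := <-w.ResultChan()
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			panic("unexpected object type")
		}
		lastRV = cm.ResourceVersion
		fmt.Printf("Got : %s %s rv=%s\n", ev.Type, cm.Name, lastRV)
	}
	w.Stop() // close the watch

	// ... changes may happen here while no watch is open ...

	// Restart from the last observed resourceVersion; events since then replay.
	w2, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector:   selector,
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	defer w2.Stop()
	count := 0
	for ev := range w2.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type)
		if count++; count == 2 { // e.g. the MODIFIED and DELETED events seen above
			break
		}
	}
}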
SSSSSSSSSSSS
Feb 19 13:18:34.455: INFO: Running AfterSuite actions on all nodes
Feb 19 13:18:34.455: INFO: Running AfterSuite actions on node 1
Feb 19 13:18:34.455: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9066.901 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS