I0203 10:47:17.009363 8 e2e.go:224] Starting e2e run "8ae9a159-4672-11ea-ab15-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580726835 - Will randomize all specs
Will run 201 of 2164 specs
Feb 3 10:47:17.576: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 10:47:17.580: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 3 10:47:17.600: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 3 10:47:17.642: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 3 10:47:17.642: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 3 10:47:17.642: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 3 10:47:17.657: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 3 10:47:17.657: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 3 10:47:17.657: INFO: e2e test version: v1.13.12
Feb 3 10:47:17.659: INFO: kube-apiserver version: v1.13.8
SSS
------------------------------
[sig-apps] Daemon set [Serial]
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:47:17.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Feb 3 10:47:17.908: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 3 10:47:17.950: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 3 10:47:17.995: INFO: Number of nodes with available pods: 0
Feb 3 10:47:17.995: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:20.825: INFO: Number of nodes with available pods: 0
Feb 3 10:47:20.825: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:21.074: INFO: Number of nodes with available pods: 0
Feb 3 10:47:21.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:22.059: INFO: Number of nodes with available pods: 0
Feb 3 10:47:22.059: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:23.074: INFO: Number of nodes with available pods: 0
Feb 3 10:47:23.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:24.344: INFO: Number of nodes with available pods: 0
Feb 3 10:47:24.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:25.129: INFO: Number of nodes with available pods: 0
Feb 3 10:47:25.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:26.012: INFO: Number of nodes with available pods: 0
Feb 3 10:47:26.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:27.066: INFO: Number of nodes with available pods: 0
Feb 3 10:47:27.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:28.045: INFO: Number of nodes with available pods: 1
Feb 3 10:47:28.045: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 3 10:47:28.120: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:29.163: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:30.157: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:31.321: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:32.161: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:33.193: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:34.181: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:35.151: INFO: Wrong image for pod: daemon-set-4vlhf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 3 10:47:35.151: INFO: Pod daemon-set-4vlhf is not available
Feb 3 10:47:36.150: INFO: Pod daemon-set-v4smh is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 3 10:47:36.185: INFO: Number of nodes with available pods: 0
Feb 3 10:47:36.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:37.518: INFO: Number of nodes with available pods: 0
Feb 3 10:47:37.518: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:38.545: INFO: Number of nodes with available pods: 0
Feb 3 10:47:38.546: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:39.230: INFO: Number of nodes with available pods: 0
Feb 3 10:47:39.230: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:40.215: INFO: Number of nodes with available pods: 0
Feb 3 10:47:40.215: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:41.201: INFO: Number of nodes with available pods: 0
Feb 3 10:47:41.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:42.340: INFO: Number of nodes with available pods: 0
Feb 3 10:47:42.340: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:43.282: INFO: Number of nodes with available pods: 0
Feb 3 10:47:43.282: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 3 10:47:44.199: INFO: Number of nodes with available pods: 1
Feb 3 10:47:44.199: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lkbwd, will wait for the garbage collector to delete the pods
Feb 3 10:47:44.314: INFO: Deleting DaemonSet.extensions daemon-set took: 18.634858ms
Feb 3 10:47:44.414: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.320558ms
Feb 3 10:48:02.746: INFO: Number of nodes with available pods: 0
Feb 3 10:48:02.746: INFO: Number of running nodes: 0, number of available pods: 0
Feb 3 10:48:02.751: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lkbwd/daemonsets","resourceVersion":"20404126"},"items":null}
Feb 3 10:48:02.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lkbwd/pods","resourceVersion":"20404126"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:48:02.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lkbwd" for this suite.
Feb 3 10:48:10.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:48:10.955: INFO: namespace: e2e-tests-daemonsets-lkbwd, resource: bindings, ignored listing per whitelist
Feb 3 10:48:11.054: INFO: namespace e2e-tests-daemonsets-lkbwd deletion completed in 8.280256006s
• [SLOW TEST:53.395 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:48:11.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:49:13.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-zbjck" for this suite.
Feb 3 10:49:19.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:49:19.624: INFO: namespace: e2e-tests-container-runtime-zbjck, resource: bindings, ignored listing per whitelist
Feb 3 10:49:19.629: INFO: namespace e2e-tests-container-runtime-zbjck deletion completed in 6.227384447s
• [SLOW TEST:68.575 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:49:19.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-d46v4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d46v4 to expose endpoints map[]
Feb 3 10:49:19.994: INFO: Get endpoints failed (90.31052ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 3 10:49:21.010: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d46v4 exposes endpoints map[] (1.106044186s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-d46v4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d46v4 to expose endpoints map[pod1:[80]]
Feb 3 10:49:26.501: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.467998932s elapsed, will retry)
Feb 3 10:49:30.749: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d46v4 exposes endpoints map[pod1:[80]] (9.716657972s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-d46v4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d46v4 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 3 10:49:36.933: INFO: Unexpected endpoints: found map[d57d49da-4672-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (6.163128906s elapsed, will retry)
Feb 3 10:49:40.171: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d46v4 exposes endpoints map[pod1:[80] pod2:[80]] (9.4014372s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-d46v4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d46v4 to expose endpoints map[pod2:[80]]
Feb 3 10:49:41.589: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d46v4 exposes endpoints map[pod2:[80]] (1.407880526s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-d46v4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d46v4 to expose endpoints map[]
Feb 3 10:49:42.679: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d46v4 exposes endpoints map[] (1.054793714s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:49:43.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-d46v4" for this suite.
Feb 3 10:50:08.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:50:08.121: INFO: namespace: e2e-tests-services-d46v4, resource: bindings, ignored listing per whitelist
Feb 3 10:50:08.224: INFO: namespace e2e-tests-services-d46v4 deletion completed in 24.232136457s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:48.594 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:50:08.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 3 10:50:08.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2swql'
Feb 3 10:50:10.902: INFO: stderr: ""
Feb 3 10:50:10.902: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 3 10:50:11.916: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:11.916: INFO: Found 0 / 1
Feb 3 10:50:12.917: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:12.917: INFO: Found 0 / 1
Feb 3 10:50:13.960: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:13.961: INFO: Found 0 / 1
Feb 3 10:50:14.919: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:14.919: INFO: Found 0 / 1
Feb 3 10:50:15.926: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:15.926: INFO: Found 0 / 1
Feb 3 10:50:17.131: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:17.131: INFO: Found 0 / 1
Feb 3 10:50:17.918: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:17.919: INFO: Found 0 / 1
Feb 3 10:50:18.978: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:18.978: INFO: Found 0 / 1
Feb 3 10:50:19.920: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:19.920: INFO: Found 0 / 1
Feb 3 10:50:20.925: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:20.925: INFO: Found 1 / 1
Feb 3 10:50:20.925: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 3 10:50:20.938: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 10:50:20.938: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Feb 3 10:50:20.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql'
Feb 3 10:50:21.097: INFO: stderr: ""
Feb 3 10:50:21.097: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Feb 10:50:19.241 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 10:50:19.241 # Server started, Redis version 3.2.12\n1:M 03 Feb 10:50:19.241 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 10:50:19.241 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 3 10:50:21.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql --tail=1'
Feb 3 10:50:21.289: INFO: stderr: ""
Feb 3 10:50:21.289: INFO: stdout: "1:M 03 Feb 10:50:19.241 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 3 10:50:21.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql --limit-bytes=1'
Feb 3 10:50:21.429: INFO: stderr: ""
Feb 3 10:50:21.429: INFO: stdout: " "
STEP: exposing timestamps
Feb 3 10:50:21.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql --tail=1 --timestamps'
Feb 3 10:50:21.583: INFO: stderr: ""
Feb 3 10:50:21.583: INFO: stdout: "2020-02-03T10:50:19.242372761Z 1:M 03 Feb 10:50:19.241 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 3 10:50:24.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql --since=1s'
Feb 3 10:50:24.245: INFO: stderr: ""
Feb 3 10:50:24.245: INFO: stdout: ""
Feb 3 10:50:24.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gzns6 redis-master --namespace=e2e-tests-kubectl-2swql --since=24h'
Feb 3 10:50:24.401: INFO: stderr: ""
Feb 3 10:50:24.402: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Feb 10:50:19.241 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 10:50:19.241 # Server started, Redis version 3.2.12\n1:M 03 Feb 10:50:19.241 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 10:50:19.241 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 3 10:50:24.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2swql'
Feb 3 10:50:24.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 3 10:50:24.569: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 3 10:50:24.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-2swql'
Feb 3 10:50:24.715: INFO: stderr: "No resources found.\n"
Feb 3 10:50:24.715: INFO: stdout: ""
Feb 3 10:50:24.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-2swql -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 3 10:50:24.874: INFO: stderr: ""
Feb 3 10:50:24.874: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:50:24.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2swql" for this suite.
Feb 3 10:50:48.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:50:49.055: INFO: namespace: e2e-tests-kubectl-2swql, resource: bindings, ignored listing per whitelist
Feb 3 10:50:49.118: INFO: namespace e2e-tests-kubectl-2swql deletion completed in 24.223744579s
• [SLOW TEST:40.893 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:50:49.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 3 10:51:11.753: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:11.793: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:13.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:13.822: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:15.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:15.811: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:17.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:17.809: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:19.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:19.812: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:21.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:21.812: INFO: Pod pod-with-poststart-http-hook still exists
Feb 3 10:51:23.793: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 3 10:51:23.832: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:51:23.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cdlbw" for this suite.
Feb 3 10:51:47.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:51:48.076: INFO: namespace: e2e-tests-container-lifecycle-hook-cdlbw, resource: bindings, ignored listing per whitelist
Feb 3 10:51:48.166: INFO: namespace e2e-tests-container-lifecycle-hook-cdlbw deletion completed in 24.284851102s
• [SLOW TEST:59.048 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:51:48.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 3 10:51:48.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-6rsfm" to be "success or failure"
Feb 3 10:51:48.406: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.58888ms
Feb 3 10:51:50.419: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024889238s
Feb 3 10:51:52.435: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040433028s
Feb 3 10:51:54.482: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087509961s
Feb 3 10:51:56.696: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301391912s
Feb 3 10:51:58.749: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.355002237s
STEP: Saw pod success
Feb 3 10:51:58.749: INFO: Pod "downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb 3 10:51:58.758: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005 container client-container:
STEP: delete the pod
Feb 3 10:51:58.919: INFO: Waiting for pod downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005 to disappear
Feb 3 10:51:58.941: INFO: Pod downwardapi-volume-2d525c50-4673-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:51:58.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6rsfm" for this suite.
Feb 3 10:52:05.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:52:05.072: INFO: namespace: e2e-tests-downward-api-6rsfm, resource: bindings, ignored listing per whitelist
Feb 3 10:52:05.214: INFO: namespace e2e-tests-downward-api-6rsfm deletion completed in 6.26026349s
• [SLOW TEST:17.048 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:52:05.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 3 10:55:08.972: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:09.053: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:11.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:11.069: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:13.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:13.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:15.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:15.070: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:17.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:17.062: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:19.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:19.065: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:21.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:21.065: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:23.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:23.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:25.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:25.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:27.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:27.078: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:29.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:29.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:31.054: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:31.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:33.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:33.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:35.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:35.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:37.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:37.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:39.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:39.115: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:41.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:41.071: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:43.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:43.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:45.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:45.124: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:47.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:47.070: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:49.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:49.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:51.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:51.064: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:53.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:53.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:55.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:55.071: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:57.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:57.084: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:55:59.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:55:59.068: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:01.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:01.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:03.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:03.068: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:05.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:05.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:07.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:07.074: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:09.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:09.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:11.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:11.081: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:13.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:13.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:15.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:15.074: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:17.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:17.071: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:19.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:19.070: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:21.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:21.070: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:23.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:23.070: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:25.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:25.075: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:27.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:27.568: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:29.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:29.270: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:31.054: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:31.123: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:33.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:33.073: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:35.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:35.068: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:37.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:37.069: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:39.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:39.079: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:41.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:41.072: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 3 10:56:43.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 3 10:56:43.070: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:56:43.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-m4qjc" for this suite.
Feb 3 10:57:07.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:57:07.223: INFO: namespace: e2e-tests-container-lifecycle-hook-m4qjc, resource: bindings, ignored listing per whitelist
Feb 3 10:57:07.282: INFO: namespace e2e-tests-container-lifecycle-hook-m4qjc deletion completed in 24.204411031s
• [SLOW TEST:302.068 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:57:07.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 3 10:57:07.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:07.968: INFO: stderr: ""
Feb 3 10:57:07.968: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 3 10:57:07.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:08.184: INFO: stderr: ""
Feb 3 10:57:08.184: INFO: stdout: "update-demo-nautilus-lpbvc update-demo-nautilus-mzqrx "
Feb 3 10:57:08.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpbvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:08.351: INFO: stderr: ""
Feb 3 10:57:08.351: INFO: stdout: ""
Feb 3 10:57:08.351: INFO: update-demo-nautilus-lpbvc is created but not running
Feb 3 10:57:13.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:14.585: INFO: stderr: ""
Feb 3 10:57:14.586: INFO: stdout: "update-demo-nautilus-lpbvc update-demo-nautilus-mzqrx "
Feb 3 10:57:14.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpbvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:15.194: INFO: stderr: ""
Feb 3 10:57:15.194: INFO: stdout: ""
Feb 3 10:57:15.194: INFO: update-demo-nautilus-lpbvc is created but not running
Feb 3 10:57:20.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:20.464: INFO: stderr: ""
Feb 3 10:57:20.464: INFO: stdout: "update-demo-nautilus-lpbvc update-demo-nautilus-mzqrx "
Feb 3 10:57:20.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpbvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:20.605: INFO: stderr: ""
Feb 3 10:57:20.605: INFO: stdout: "true"
Feb 3 10:57:20.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpbvc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:20.697: INFO: stderr: ""
Feb 3 10:57:20.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 3 10:57:20.697: INFO: validating pod update-demo-nautilus-lpbvc
Feb 3 10:57:20.727: INFO: got data: { "image": "nautilus.jpg" }
Feb 3 10:57:20.727: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 3 10:57:20.727: INFO: update-demo-nautilus-lpbvc is verified up and running
Feb 3 10:57:20.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzqrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:20.857: INFO: stderr: ""
Feb 3 10:57:20.857: INFO: stdout: "true"
Feb 3 10:57:20.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzqrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:20.979: INFO: stderr: ""
Feb 3 10:57:20.979: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 3 10:57:20.979: INFO: validating pod update-demo-nautilus-mzqrx
Feb 3 10:57:20.988: INFO: got data: { "image": "nautilus.jpg" }
Feb 3 10:57:20.988: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 3 10:57:20.988: INFO: update-demo-nautilus-mzqrx is verified up and running
STEP: rolling-update to new replication controller
Feb 3 10:57:20.991: INFO: scanned /root for discovery docs:
Feb 3 10:57:20.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:56.494: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 3 10:57:56.495: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 3 10:57:56.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:56.760: INFO: stderr: ""
Feb 3 10:57:56.760: INFO: stdout: "update-demo-kitten-8dh57 update-demo-kitten-qq24d "
Feb 3 10:57:56.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8dh57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:57.065: INFO: stderr: ""
Feb 3 10:57:57.065: INFO: stdout: "true"
Feb 3 10:57:57.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8dh57 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:57.243: INFO: stderr: ""
Feb 3 10:57:57.243: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 3 10:57:57.243: INFO: validating pod update-demo-kitten-8dh57
Feb 3 10:57:57.441: INFO: got data: { "image": "kitten.jpg" }
Feb 3 10:57:57.441: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 3 10:57:57.441: INFO: update-demo-kitten-8dh57 is verified up and running
Feb 3 10:57:57.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qq24d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:57.548: INFO: stderr: ""
Feb 3 10:57:57.548: INFO: stdout: "true"
Feb 3 10:57:57.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qq24d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-98wqv'
Feb 3 10:57:57.665: INFO: stderr: ""
Feb 3 10:57:57.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 3 10:57:57.665: INFO: validating pod update-demo-kitten-qq24d
Feb 3 10:57:57.683: INFO: got data: { "image": "kitten.jpg" }
Feb 3 10:57:57.684: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 3 10:57:57.684: INFO: update-demo-kitten-qq24d is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 3 10:57:57.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-98wqv" for this suite.
Feb 3 10:58:37.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 10:58:37.833: INFO: namespace: e2e-tests-kubectl-98wqv, resource: bindings, ignored listing per whitelist
Feb 3 10:58:37.964: INFO: namespace e2e-tests-kubectl-98wqv deletion completed in 40.241942427s
• [SLOW TEST:90.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 3 10:58:37.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 3 10:58:38.084: INFO: Creating deployment "test-recreate-deployment"
Feb 3 10:58:38.096: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 3 10:58:38.167: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 3 10:58:40.553: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 3 10:58:40.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 10:58:42.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 10:58:44.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 10:58:46.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 10:58:48.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716324318, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 3 10:58:50.621: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 3 10:58:50.801: INFO: Updating deployment test-recreate-deployment
Feb 3 10:58:50.802: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 3 10:58:52.852: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-sckbh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sckbh/deployments/test-recreate-deployment,UID:2186d192-4674-11ea-a994-fa163e34d433,ResourceVersion:20405350,Generation:2,CreationTimestamp:2020-02-03 10:58:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-03 10:58:51 +0000 UTC 2020-02-03 10:58:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-03 10:58:52 +0000 UTC 2020-02-03 10:58:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Feb 3 10:58:52.884: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-sckbh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sckbh/replicasets/test-recreate-deployment-589c4bfd,UID:298c7ea0-4674-11ea-a994-fa163e34d433,ResourceVersion:20405346,Generation:1,CreationTimestamp:2020-02-03 10:58:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2186d192-4674-11ea-a994-fa163e34d433 0xc0017c6b7f 0xc0017c6b90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 10:58:52.884: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 3 10:58:52.885: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-sckbh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sckbh/replicasets/test-recreate-deployment-5bf7f65dc,UID:2193bd4d-4674-11ea-a994-fa163e34d433,ResourceVersion:20405339,Generation:2,CreationTimestamp:2020-02-03 10:58:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2186d192-4674-11ea-a994-fa163e34d433 0xc0017c6c50 0xc0017c6c51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 10:58:53.465: INFO: Pod "test-recreate-deployment-589c4bfd-mbps9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-mbps9,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-sckbh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sckbh/pods/test-recreate-deployment-589c4bfd-mbps9,UID:2994d877-4674-11ea-a994-fa163e34d433,ResourceVersion:20405351,Generation:0,CreationTimestamp:2020-02-03 10:58:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 298c7ea0-4674-11ea-a994-fa163e34d433 0xc0017c75af 0xc0017c75c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sl2zf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sl2zf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sl2zf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017c7620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017c7640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 10:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 10:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 10:58:52 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 10:58:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 10:58:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 10:58:53.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sckbh" for this suite. Feb 3 10:59:03.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 10:59:03.934: INFO: namespace: e2e-tests-deployment-sckbh, resource: bindings, ignored listing per whitelist Feb 3 10:59:04.276: INFO: namespace e2e-tests-deployment-sckbh deletion completed in 10.77134862s • [SLOW TEST:26.312 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 10:59:04.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t2xd4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t2xd4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 10:59:18.660: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.665: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.670: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.676: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.681: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.685: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.690: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.696: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.702: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.705: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-314c25ac-4674-11ea-ab15-0242ac110005) Feb 3 10:59:18.738: INFO: Lookups using e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t2xd4.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord] Feb 3 10:59:23.945: INFO: DNS probes using e2e-tests-dns-t2xd4/dns-test-314c25ac-4674-11ea-ab15-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 10:59:24.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-t2xd4" for this suite. 
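For reference, the cluster-DNS lookups that the probe pods above perform can be reproduced by hand; the sketch below is illustrative (the throwaway pod name and busybox image are not part of this run), and the dig flags are the same ones the probe script uses:

kubectl --kubeconfig=/root/.kube/config run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
# or, from any pod that has dig installed, mirror the UDP and TCP probes directly:
kubectl exec <probe-pod> -- dig +notcp +noall +answer +search kubernetes.default A
kubectl exec <probe-pod> -- dig +tcp +noall +answer +search kubernetes.default A
# the probe script treats a non-empty answer section as success and writes OK into /results for each name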
Feb 3 10:59:30.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 10:59:30.467: INFO: namespace: e2e-tests-dns-t2xd4, resource: bindings, ignored listing per whitelist Feb 3 10:59:30.606: INFO: namespace e2e-tests-dns-t2xd4 deletion completed in 6.361685321s • [SLOW TEST:26.330 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 10:59:30.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6ljt7 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 10:59:30.886: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 11:00:03.324: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6ljt7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:00:03.324: INFO: >>> kubeConfig: /root/.kube/config I0203 11:00:03.435376 8 log.go:172] (0xc0016044d0) (0xc001bdc500) Create stream I0203 11:00:03.435711 8 log.go:172] (0xc0016044d0) (0xc001bdc500) Stream added, broadcasting: 1 I0203 11:00:03.443492 8 log.go:172] (0xc0016044d0) Reply frame received for 1 I0203 11:00:03.443540 8 log.go:172] (0xc0016044d0) (0xc0019026e0) Create stream I0203 11:00:03.443555 8 log.go:172] (0xc0016044d0) (0xc0019026e0) Stream added, broadcasting: 3 I0203 11:00:03.445218 8 log.go:172] (0xc0016044d0) Reply frame received for 3 I0203 11:00:03.445259 8 log.go:172] (0xc0016044d0) (0xc001bdc5a0) Create stream I0203 11:00:03.445274 8 log.go:172] (0xc0016044d0) (0xc001bdc5a0) Stream added, broadcasting: 5 I0203 11:00:03.446541 8 log.go:172] (0xc0016044d0) Reply frame received for 5 I0203 11:00:04.627466 8 log.go:172] (0xc0016044d0) Data frame received for 3 I0203 11:00:04.627716 8 log.go:172] (0xc0019026e0) (3) Data frame handling I0203 11:00:04.627764 8 log.go:172] (0xc0019026e0) (3) Data frame sent I0203 11:00:04.853257 8 log.go:172] (0xc0016044d0) Data frame received for 1 I0203 11:00:04.853505 8 log.go:172] (0xc0016044d0) (0xc0019026e0) Stream removed, broadcasting: 3 I0203 11:00:04.853633 8 log.go:172] (0xc001bdc500) (1) Data frame handling I0203 11:00:04.853696 8 log.go:172] (0xc001bdc500) (1) Data frame sent I0203 11:00:04.853724 8 log.go:172] (0xc0016044d0) (0xc001bdc500) Stream removed, broadcasting: 1 I0203 11:00:04.853788 8 log.go:172] (0xc0016044d0) (0xc001bdc5a0) Stream 
removed, broadcasting: 5 I0203 11:00:04.854092 8 log.go:172] (0xc0016044d0) Go away received I0203 11:00:04.854341 8 log.go:172] (0xc0016044d0) (0xc001bdc500) Stream removed, broadcasting: 1 I0203 11:00:04.854365 8 log.go:172] (0xc0016044d0) (0xc0019026e0) Stream removed, broadcasting: 3 I0203 11:00:04.854378 8 log.go:172] (0xc0016044d0) (0xc001bdc5a0) Stream removed, broadcasting: 5 Feb 3 11:00:04.854: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:00:04.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6ljt7" for this suite. Feb 3 11:00:28.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:00:28.987: INFO: namespace: e2e-tests-pod-network-test-6ljt7, resource: bindings, ignored listing per whitelist Feb 3 11:00:29.222: INFO: namespace e2e-tests-pod-network-test-6ljt7 deletion completed in 24.303826203s • [SLOW TEST:58.616 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:00:29.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hrklr [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-hrklr STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-hrklr STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-hrklr STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-hrklr Feb 3 11:00:41.627: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrklr, name: ss-0, uid: 6ada2dbe-4674-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Feb 3 11:00:42.581: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrklr, name: ss-0, uid: 6ada2dbe-4674-11ea-a994-fa163e34d433, status phase: Failed. 
Waiting for statefulset controller to delete. Feb 3 11:00:42.696: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrklr, name: ss-0, uid: 6ada2dbe-4674-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 3 11:00:42.739: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-hrklr STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-hrklr STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-hrklr and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 3 11:00:55.446: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hrklr Feb 3 11:00:55.467: INFO: Scaling statefulset ss to 0 Feb 3 11:01:05.600: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 11:01:05.615: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:01:05.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hrklr" for this suite. Feb 3 11:01:13.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:01:14.111: INFO: namespace: e2e-tests-statefulset-hrklr, resource: bindings, ignored listing per whitelist Feb 3 11:01:14.197: INFO: namespace e2e-tests-statefulset-hrklr deletion completed in 8.429087174s • [SLOW TEST:44.974 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:01:14.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:01:20.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cvrbx" for this suite. 
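For reference, the behaviour verified above (Services are removed together with their namespace) can be reproduced with a hand-made namespace; all names below are illustrative:

kubectl create namespace ns-delete-demo
kubectl create service clusterip demo-svc --tcp=80:80 --namespace=ns-delete-demo
kubectl delete namespace ns-delete-demo
# once namespace deletion completes, the Service is gone with it:
kubectl get service demo-svc --namespace=ns-delete-demo    # expected: Error from server (NotFound)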
Feb 3 11:01:26.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:01:27.170: INFO: namespace: e2e-tests-namespaces-cvrbx, resource: bindings, ignored listing per whitelist Feb 3 11:01:27.172: INFO: namespace e2e-tests-namespaces-cvrbx deletion completed in 6.261064826s STEP: Destroying namespace "e2e-tests-nsdeletetest-fz6v5" for this suite. Feb 3 11:01:27.175: INFO: Namespace e2e-tests-nsdeletetest-fz6v5 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-26dth" for this suite. Feb 3 11:01:33.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:01:33.366: INFO: namespace: e2e-tests-nsdeletetest-26dth, resource: bindings, ignored listing per whitelist Feb 3 11:01:33.374: INFO: namespace e2e-tests-nsdeletetest-26dth deletion completed in 6.198774925s • [SLOW TEST:19.177 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:01:33.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 3 11:01:33.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:35.654: INFO: stderr: "" Feb 3 11:01:35.654: INFO: stdout: "pod/pause created\n" Feb 3 11:01:35.654: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 3 11:01:35.655: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-q6jk9" to be "running and ready" Feb 3 11:01:35.676: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.390096ms Feb 3 11:01:37.696: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040723381s Feb 3 11:01:40.733: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.078163796s Feb 3 11:01:42.755: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.10016841s Feb 3 11:01:44.804: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 9.148817066s Feb 3 11:01:44.804: INFO: Pod "pause" satisfied condition "running and ready" Feb 3 11:01:44.804: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 3 11:01:44.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.035: INFO: stderr: "" Feb 3 11:01:45.035: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 3 11:01:45.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.189: INFO: stderr: "" Feb 3 11:01:45.189: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 3 11:01:45.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.321: INFO: stderr: "" Feb 3 11:01:45.321: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 3 11:01:45.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.449: INFO: stderr: "" Feb 3 11:01:45.449: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 3 11:01:45.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 11:01:45.614: INFO: stdout: "pod \"pause\" force deleted\n" Feb 3 11:01:45.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-q6jk9' Feb 3 11:01:45.775: INFO: stderr: "No resources found.\n" Feb 3 11:01:45.775: INFO: stdout: "" Feb 3 11:01:45.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-q6jk9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 11:01:46.014: INFO: stderr: "" Feb 3 11:01:46.014: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:01:46.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q6jk9" for this suite. 
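The label add / verify / remove cycle above is plain kubectl; a minimal sketch against the same pod (namespace flags omitted, and the --overwrite line is an extra step the test itself does not perform):

kubectl label pod pause testing-label=testing-label-value      # add the label
kubectl get pod pause -L testing-label                         # -L prints the label value as an extra column
kubectl label pod pause testing-label=other-value --overwrite  # changing an existing value requires --overwrite
kubectl label pod pause testing-label-                         # a trailing '-' removes the label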
Feb 3 11:01:52.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:01:52.745: INFO: namespace: e2e-tests-kubectl-q6jk9, resource: bindings, ignored listing per whitelist Feb 3 11:01:52.803: INFO: namespace e2e-tests-kubectl-q6jk9 deletion completed in 6.75279452s • [SLOW TEST:19.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:01:52.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 3 11:01:53.041: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 11:01:53.199: INFO: Waiting for terminating namespaces to be deleted... Feb 3 11:01:53.203: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 3 11:01:53.221: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 3 11:01:53.221: INFO: Container weave ready: true, restart count 0 Feb 3 11:01:53.221: INFO: Container weave-npc ready: true, restart count 0 Feb 3 11:01:53.221: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 3 11:01:53.221: INFO: Container coredns ready: true, restart count 0 Feb 3 11:01:53.221: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 3 11:01:53.221: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 3 11:01:53.221: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 3 11:01:53.221: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 3 11:01:53.221: INFO: Container coredns ready: true, restart count 0 Feb 3 11:01:53.221: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 3 11:01:53.221: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 11:01:53.221: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 
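The steps that follow apply a random label to the node and relaunch the pod with a matching nodeSelector; outside the suite the same scheduling check can be done by hand. This is a minimal sketch with an illustrative label key/value and pod name (only the node name is taken from this run):

kubectl label node hunter-server-hu5at5svl7ps disktype=ssd
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod nodeselector-demo -o wide                      # should be scheduled onto the labelled node
kubectl label node hunter-server-hu5at5svl7ps disktype-        # remove the label again, as the test does below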
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9bd8ebe6-4674-11ea-ab15-0242ac110005 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-9bd8ebe6-4674-11ea-ab15-0242ac110005 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-9bd8ebe6-4674-11ea-ab15-0242ac110005 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:02:17.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-qtbkm" for this suite. Feb 3 11:02:35.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:02:35.961: INFO: namespace: e2e-tests-sched-pred-qtbkm, resource: bindings, ignored listing per whitelist Feb 3 11:02:36.097: INFO: namespace e2e-tests-sched-pred-qtbkm deletion completed in 18.414451061s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:43.293 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:02:36.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-9l9ph STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-9l9ph STEP: Deleting pre-stop pod Feb 3 11:03:01.710: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:03:01.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-9l9ph" for this suite. 
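The preStop behaviour exercised above relies on the pod lifecycle hook: the kubelet runs the hook before sending SIGTERM to the container. A minimal, self-contained sketch (pod name, image and hook command are illustrative, not the test's own server/tester pods):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop-ran > /tmp/prestop && sleep 5"]
EOF
kubectl delete pod prestop-demo     # the exec hook runs to completion (or until the grace period expires) before SIGTERM is sent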
Feb 3 11:03:43.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:03:44.059: INFO: namespace: e2e-tests-prestop-9l9ph, resource: bindings, ignored listing per whitelist Feb 3 11:03:44.147: INFO: namespace e2e-tests-prestop-9l9ph deletion completed in 42.333247787s • [SLOW TEST:68.049 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:03:44.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-d81d05bb-4674-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:03:44.516: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-p5bct" to be "success or failure" Feb 3 11:03:44.534: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.981249ms Feb 3 11:03:46.558: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041868978s Feb 3 11:03:48.590: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073416164s Feb 3 11:03:50.613: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096319724s Feb 3 11:03:52.861: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344400968s Feb 3 11:03:54.870: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.35330767s STEP: Saw pod success Feb 3 11:03:54.870: INFO: Pod "pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:03:54.876: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 3 11:03:55.833: INFO: Waiting for pod pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005 to disappear Feb 3 11:03:55.980: INFO: Pod pod-projected-configmaps-d81de455-4674-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:03:55.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p5bct" for this suite. Feb 3 11:04:02.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:04:02.110: INFO: namespace: e2e-tests-projected-p5bct, resource: bindings, ignored listing per whitelist Feb 3 11:04:02.159: INFO: namespace e2e-tests-projected-p5bct deletion completed in 6.169149651s • [SLOW TEST:18.012 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:04:02.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:04:02.348: INFO: Creating deployment "nginx-deployment" Feb 3 11:04:02.373: INFO: Waiting for observed generation 1 Feb 3 11:04:04.423: INFO: Waiting for all required pods to come up Feb 3 11:04:04.987: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 3 11:04:43.028: INFO: Waiting for deployment "nginx-deployment" to complete Feb 3 11:04:43.041: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 3 11:04:43.061: INFO: Updating deployment nginx-deployment Feb 3 11:04:43.061: INFO: Waiting for observed generation 2 Feb 3 11:04:45.091: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 3 11:04:45.096: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 3 11:04:45.100: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 3 11:04:45.123: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 3 
11:04:45.123: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 3 11:04:45.127: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 3 11:04:45.136: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 3 11:04:45.136: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 3 11:04:45.160: INFO: Updating deployment nginx-deployment Feb 3 11:04:45.160: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 3 11:04:48.297: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 3 11:04:52.756: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 3 11:04:54.540: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t9nlh/deployments/nginx-deployment,UID:e2ce4858-4674-11ea-a994-fa163e34d433,ResourceVersion:20406428,Generation:3,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-03 11:04:44 
+0000 UTC 2020-02-03 11:04:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-03 11:04:48 +0000 UTC 2020-02-03 11:04:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 3 11:04:55.330: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t9nlh/replicasets/nginx-deployment-5c98f8fb5,UID:fb12ac8a-4674-11ea-a994-fa163e34d433,ResourceVersion:20406434,Generation:3,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e2ce4858-4674-11ea-a994-fa163e34d433 0xc001e99b37 0xc001e99b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 11:04:55.330: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 3 11:04:55.330: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t9nlh/replicasets/nginx-deployment-85ddf47c5d,UID:e2d4ee3a-4674-11ea-a994-fa163e34d433,ResourceVersion:20406423,Generation:3,CreationTimestamp:2020-02-03 11:04:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e2ce4858-4674-11ea-a994-fa163e34d433 0xc001e99bf7 0xc001e99bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 3 11:04:56.654: INFO: Pod "nginx-deployment-5c98f8fb5-4hfq2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4hfq2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-4hfq2,UID:fb2247ae-4674-11ea-a994-fa163e34d433,ResourceVersion:20406338,Generation:0,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06327 0xc001f06328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f063b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.655: INFO: Pod "nginx-deployment-5c98f8fb5-57rwg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-57rwg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-57rwg,UID:fb276f1d-4674-11ea-a994-fa163e34d433,ResourceVersion:20406357,Generation:0,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06477 0xc001f06478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f064e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.655: INFO: Pod "nginx-deployment-5c98f8fb5-5sctl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5sctl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-5sctl,UID:fb277a00-4674-11ea-a994-fa163e34d433,ResourceVersion:20406361,Generation:0,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f065c7 0xc001f065c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06630} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.655: INFO: Pod "nginx-deployment-5c98f8fb5-8l79x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8l79x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-8l79x,UID:ffcc9a5b-4674-11ea-a994-fa163e34d433,ResourceVersion:20406414,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06717 0xc001f06718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f067a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.655: INFO: Pod "nginx-deployment-5c98f8fb5-c4hct" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c4hct,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-c4hct,UID:ffcc65a4-4674-11ea-a994-fa163e34d433,ResourceVersion:20406416,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06817 0xc001f06818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06880} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f068a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.656: INFO: Pod "nginx-deployment-5c98f8fb5-cm8tf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cm8tf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-cm8tf,UID:ff8958bc-4674-11ea-a994-fa163e34d433,ResourceVersion:20406394,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06917 0xc001f06918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06980} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f069a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.656: INFO: Pod "nginx-deployment-5c98f8fb5-dmfcd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dmfcd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-dmfcd,UID:fff7bad2-4674-11ea-a994-fa163e34d433,ResourceVersion:20406421,Generation:0,CreationTimestamp:2020-02-03 11:04:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06a17 0xc001f06a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.656: INFO: Pod "nginx-deployment-5c98f8fb5-jbz2c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jbz2c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-jbz2c,UID:fb75b6f8-4674-11ea-a994-fa163e34d433,ResourceVersion:20406371,Generation:0,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06b17 0xc001f06b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.656: INFO: Pod "nginx-deployment-5c98f8fb5-l4c76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l4c76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-l4c76,UID:ffcc438a-4674-11ea-a994-fa163e34d433,ResourceVersion:20406407,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06c67 0xc001f06c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.657: INFO: Pod "nginx-deployment-5c98f8fb5-mt6r4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mt6r4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-mt6r4,UID:ff8903cc-4674-11ea-a994-fa163e34d433,ResourceVersion:20406391,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06d67 0xc001f06d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.657: INFO: Pod "nginx-deployment-5c98f8fb5-p2vzg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p2vzg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-p2vzg,UID:fe74ac42-4674-11ea-a994-fa163e34d433,ResourceVersion:20406380,Generation:0,CreationTimestamp:2020-02-03 11:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06e67 0xc001f06e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.657: INFO: Pod "nginx-deployment-5c98f8fb5-w4cbj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w4cbj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-w4cbj,UID:ffcc78db-4674-11ea-a994-fa163e34d433,ResourceVersion:20406415,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f06f67 0xc001f06f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f06fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f06ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.657: INFO: Pod "nginx-deployment-5c98f8fb5-z2cf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z2cf5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-5c98f8fb5-z2cf5,UID:fb7f0d96-4674-11ea-a994-fa163e34d433,ResourceVersion:20406420,Generation:0,CreationTimestamp:2020-02-03 11:04:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fb12ac8a-4674-11ea-a994-fa163e34d433 0xc001f07067 0xc001f07068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f070d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f070f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.657: INFO: Pod "nginx-deployment-85ddf47c5d-5czsm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5czsm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-5czsm,UID:e3035ff8-4674-11ea-a994-fa163e34d433,ResourceVersion:20406282,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f071b7 0xc001f071b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07220} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-03 11:04:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://277890e0d60d6faa3ae921b2708e9b6446be65325d674161692cff101dcebe6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.658: INFO: Pod "nginx-deployment-85ddf47c5d-5qgjf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5qgjf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-5qgjf,UID:ffcaf591-4674-11ea-a994-fa163e34d433,ResourceVersion:20406410,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07307 0xc001f07308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.658: INFO: Pod "nginx-deployment-85ddf47c5d-88shg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-88shg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-88shg,UID:fe73cc97-4674-11ea-a994-fa163e34d433,ResourceVersion:20406442,Generation:0,CreationTimestamp:2020-02-03 11:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07407 0xc001f07408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07470} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 
11:04:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.658: INFO: Pod "nginx-deployment-85ddf47c5d-8zzfm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8zzfm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-8zzfm,UID:e2eb5d2d-4674-11ea-a994-fa163e34d433,ResourceVersion:20406293,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07547 0xc001f07548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f075b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f075d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:38 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://35dd14bbd685aeffe901d42a75ef16b3f876e474ada4b830d880cfdbe3a0eaf1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.658: INFO: Pod "nginx-deployment-85ddf47c5d-bwbdx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bwbdx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-bwbdx,UID:ffcc74c9-4674-11ea-a994-fa163e34d433,ResourceVersion:20406409,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07697 0xc001f07698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-dqzxr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dqzxr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-dqzxr,UID:ffca57dd-4674-11ea-a994-fa163e34d433,ResourceVersion:20406413,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07797 0xc001f07798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-frqnh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-frqnh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-frqnh,UID:e2ec809c-4674-11ea-a994-fa163e34d433,ResourceVersion:20406262,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07897 0xc001f07898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://57fa1fce4fa1855b4beb4a30f57fd3818cdb1710332707e6154728f7d87e7391}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-gc7pb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gc7pb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-gc7pb,UID:ffcc4f6e-4674-11ea-a994-fa163e34d433,ResourceVersion:20406408,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f079e7 0xc001f079e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-gk9p2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gk9p2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-gk9p2,UID:fe3874cf-4674-11ea-a994-fa163e34d433,ResourceVersion:20406431,Generation:0,CreationTimestamp:2020-02-03 11:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07ae7 0xc001f07ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 
11:04:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-03 11:04:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-gt5cj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gt5cj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-gt5cj,UID:ff828d3f-4674-11ea-a994-fa163e34d433,ResourceVersion:20406386,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07c27 0xc001f07c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.659: INFO: Pod "nginx-deployment-85ddf47c5d-h7z7w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h7z7w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-h7z7w,UID:e2f5f91b-4674-11ea-a994-fa163e34d433,ResourceVersion:20406300,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07d27 0xc001f07d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ad037f753804e7bcda5adc34bc97b98d95384aabf0bc9845981577670b8f7e4c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-kzm5l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kzm5l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-kzm5l,UID:ffca967e-4674-11ea-a994-fa163e34d433,ResourceVersion:20406406,Generation:0,CreationTimestamp:2020-02-03 
11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07e77 0xc001f07e78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f07f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-kzzdg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kzzdg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-kzzdg,UID:ff857c00-4674-11ea-a994-fa163e34d433,ResourceVersion:20406387,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc001f07f77 0xc001f07f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f07fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d2000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-lzmtg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lzmtg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-lzmtg,UID:ff8742d7-4674-11ea-a994-fa163e34d433,ResourceVersion:20406390,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2077 0xc0019d2078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d20e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0019d2100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-prqk9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-prqk9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-prqk9,UID:e2f62c1b-4674-11ea-a994-fa163e34d433,ResourceVersion:20406287,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2177 0xc0019d2178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d21e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d2200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://b9c4133066ce26bab58d186f9e6ecb17d572ace46abf7c67a81bb334b599de1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-r9d66" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r9d66,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-r9d66,UID:e2e81123-4674-11ea-a994-fa163e34d433,ResourceVersion:20406255,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d22c7 0xc0019d22c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d2330} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d2350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a280808c0a5f4604864259e6b91754ebc8c3dc0fdef1ee6d25fd5bd4e83a1ae9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-sdtqc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sdtqc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-sdtqc,UID:fe74458e-4674-11ea-a994-fa163e34d433,ResourceVersion:20406374,Generation:0,CreationTimestamp:2020-02-03 11:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2417 0xc0019d2418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d2480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d24a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-wlpt6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wlpt6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-wlpt6,UID:e2f54989-4674-11ea-a994-fa163e34d433,ResourceVersion:20406279,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2517 0xc0019d2518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d2580} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d25a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-03 11:04:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a23c2026fb6bb2466a6ef361580352f6833c0aa03ffab41aebf4b9ab07756f12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.660: INFO: Pod "nginx-deployment-85ddf47c5d-zp8rq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zp8rq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-zp8rq,UID:ff83f239-4674-11ea-a994-fa163e34d433,ResourceVersion:20406388,Generation:0,CreationTimestamp:2020-02-03 11:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2667 0xc0019d2668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d26d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019d26f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 11:04:56.661: INFO: Pod "nginx-deployment-85ddf47c5d-zrv4q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zrv4q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-t9nlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t9nlh/pods/nginx-deployment-85ddf47c5d-zrv4q,UID:e303192a-4674-11ea-a994-fa163e34d433,ResourceVersion:20406290,Generation:0,CreationTimestamp:2020-02-03 11:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e2d4ee3a-4674-11ea-a994-fa163e34d433 0xc0019d2767 0xc0019d2768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxjj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxjj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cxjj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019d27d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0019d27f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:04:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-03 11:04:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 11:04:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://490c898dba9f5debfdd24ace813b3a43df6f7bbcdd666823cb3f1d78eb91f2a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:04:56.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-t9nlh" for this suite. Feb 3 11:05:55.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:05:55.677: INFO: namespace: e2e-tests-deployment-t9nlh, resource: bindings, ignored listing per whitelist Feb 3 11:05:55.685: INFO: namespace e2e-tests-deployment-t9nlh deletion completed in 58.705025992s • [SLOW TEST:113.525 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:05:55.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-nppc STEP: Creating a pod to test atomic-volume-subpath Feb 3 11:05:57.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-nppc" in namespace "e2e-tests-subpath-hnwmr" to be "success or failure" Feb 3 11:05:58.532: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 766.891867ms Feb 3 11:06:00.562: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.796162929s Feb 3 11:06:03.114: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.348723484s Feb 3 11:06:05.134: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.368441359s Feb 3 11:06:08.433: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667005577s Feb 3 11:06:10.447: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.681894763s Feb 3 11:06:12.486: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720747622s Feb 3 11:06:14.636: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.870391422s Feb 3 11:06:16.649: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.883369096s Feb 3 11:06:18.792: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.026756749s Feb 3 11:06:20.801: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 23.035244454s Feb 3 11:06:22.832: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 25.065964749s Feb 3 11:06:24.859: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 27.093505822s Feb 3 11:06:26.910: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 29.144251581s Feb 3 11:06:28.923: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 31.157859025s Feb 3 11:06:30.944: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 33.178691989s Feb 3 11:06:32.961: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 35.195732759s Feb 3 11:06:35.045: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 37.279001337s Feb 3 11:06:37.594: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Running", Reason="", readiness=false. Elapsed: 39.828044679s Feb 3 11:06:39.612: INFO: Pod "pod-subpath-test-secret-nppc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.846702512s STEP: Saw pod success Feb 3 11:06:39.612: INFO: Pod "pod-subpath-test-secret-nppc" satisfied condition "success or failure" Feb 3 11:06:39.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-nppc container test-container-subpath-secret-nppc: STEP: delete the pod Feb 3 11:06:41.296: INFO: Waiting for pod pod-subpath-test-secret-nppc to disappear Feb 3 11:06:41.524: INFO: Pod pod-subpath-test-secret-nppc no longer exists STEP: Deleting pod pod-subpath-test-secret-nppc Feb 3 11:06:41.525: INFO: Deleting pod "pod-subpath-test-secret-nppc" in namespace "e2e-tests-subpath-hnwmr" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:06:41.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-hnwmr" for this suite. 
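The subpath case above creates a pod that mounts a single key of a Secret at a subPath inside the container and waits for it to exit successfully. A minimal sketch of the same shape, using hypothetical names (demo-secret, subpath-demo) rather than the generated ones in the log:

# Sketch only: the e2e framework builds its pod spec in Go; the names, image and command here are illustrative.
kubectl create secret generic demo-secret --from-literal=key1=value1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /mnt/key1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/key1
      subPath: key1            # mount just this key, not the whole secret directory
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
EOF
# The pod should reach Succeeded, mirroring the "success or failure" condition polled in the log.
kubectl get pod subpath-demo -o jsonpath='{.status.phase}'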
Feb 3 11:06:47.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:06:47.758: INFO: namespace: e2e-tests-subpath-hnwmr, resource: bindings, ignored listing per whitelist Feb 3 11:06:47.902: INFO: namespace e2e-tests-subpath-hnwmr deletion completed in 6.363356523s • [SLOW TEST:52.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:06:47.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 3 11:06:48.083: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:07:07.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-nt8vj" for this suite. 
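The init-container case above builds a pod with restartPolicy: Never and entries under spec.initContainers, then waits for them to run to completion before the regular container starts. A hedged sketch of that shape (pod name, images and commands are illustrative, not the ones the framework generates):

# Sketch only: the conformance test constructs this spec in Go rather than from a manifest.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init container"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init container"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo ran after both init containers"]
EOF
# The Initialized condition only becomes True once both init containers have exited 0.
kubectl get pod init-demo -o jsonpath='{.status.conditions[?(@.type=="Initialized")].status}'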
Feb 3 11:07:13.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:07:13.634: INFO: namespace: e2e-tests-init-container-nt8vj, resource: bindings, ignored listing per whitelist Feb 3 11:07:13.805: INFO: namespace e2e-tests-init-container-nt8vj deletion completed in 6.271001883s • [SLOW TEST:25.903 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:07:13.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:07:24.282: INFO: Waiting up to 5m0s for pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005" in namespace "e2e-tests-pods-zr57r" to be "success or failure" Feb 3 11:07:24.387: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.650633ms Feb 3 11:07:26.656: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373450681s Feb 3 11:07:28.669: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386600269s Feb 3 11:07:30.686: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403828546s Feb 3 11:07:32.905: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622637734s Feb 3 11:07:35.027: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.744774514s STEP: Saw pod success Feb 3 11:07:35.027: INFO: Pod "client-envvars-5b248046-4675-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:07:35.066: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-5b248046-4675-11ea-ab15-0242ac110005 container env3cont: STEP: delete the pod Feb 3 11:07:35.303: INFO: Waiting for pod client-envvars-5b248046-4675-11ea-ab15-0242ac110005 to disappear Feb 3 11:07:35.316: INFO: Pod client-envvars-5b248046-4675-11ea-ab15-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:07:35.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zr57r" for this suite. 
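The Pods case above starts a server pod behind a Service and then checks that a client pod created afterwards sees that Service injected as environment variables. Roughly the same behaviour can be observed by hand; the deployment, service and pod names below are hypothetical, not values from the test:

# Sketch only: hypothetical names and images; the e2e test uses its own generated objects.
kubectl create deployment envvar-backend --image=nginx
kubectl expose deployment envvar-backend --port=80
# A pod created after the Service exists is given ENVVAR_BACKEND_SERVICE_HOST / _PORT variables.
kubectl run env-check --image=busybox --restart=Never -- sh -c 'env | grep ENVVAR_BACKEND'
kubectl logs env-check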
Feb 3 11:08:19.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:08:19.531: INFO: namespace: e2e-tests-pods-zr57r, resource: bindings, ignored listing per whitelist Feb 3 11:08:19.574: INFO: namespace e2e-tests-pods-zr57r deletion completed in 44.248120474s • [SLOW TEST:65.769 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:08:19.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:08:19.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 3 11:08:19.935: INFO: stderr: "" Feb 3 11:08:19.935: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:08:19.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4fjbl" for this suite. 
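The Kubectl version case above only runs kubectl version and asserts that both the Client Version and Server Version stanzas appear in the output. The equivalent manual check:

# Prints both stanzas, as captured in the stdout line above.
kubectl --kubeconfig=/root/.kube/config version
# Client-only variant that does not contact the API server:
kubectl version --client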
Feb 3 11:08:26.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:08:26.895: INFO: namespace: e2e-tests-kubectl-4fjbl, resource: bindings, ignored listing per whitelist Feb 3 11:08:26.909: INFO: namespace e2e-tests-kubectl-4fjbl deletion completed in 6.958991944s • [SLOW TEST:7.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:08:26.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 3 11:08:27.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xwpfg' Feb 3 11:08:28.038: INFO: stderr: "" Feb 3 11:08:28.038: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 3 11:08:29.475: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:29.475: INFO: Found 0 / 1 Feb 3 11:08:30.193: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:30.193: INFO: Found 0 / 1 Feb 3 11:08:31.066: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:31.066: INFO: Found 0 / 1 Feb 3 11:08:32.059: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:32.059: INFO: Found 0 / 1 Feb 3 11:08:33.469: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:33.469: INFO: Found 0 / 1 Feb 3 11:08:34.190: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:34.190: INFO: Found 0 / 1 Feb 3 11:08:35.254: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:35.254: INFO: Found 0 / 1 Feb 3 11:08:36.056: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:36.056: INFO: Found 0 / 1 Feb 3 11:08:37.064: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:37.064: INFO: Found 1 / 1 Feb 3 11:08:37.064: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 3 11:08:37.077: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:37.077: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
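The patch step that the next entry runs applies a strategic-merge patch to add an annotation to each matched pod. Stripped of the test-harness flags, the equivalent command is shown below (the pod name is hypothetical; the log patches the generated redis-master pod):

# Sketch only: substitute a real pod name; the single quotes keep the JSON intact in the shell.
kubectl patch pod my-pod -p '{"metadata":{"annotations":{"x":"y"}}}'
# Verify the annotation landed:
kubectl get pod my-pod -o jsonpath='{.metadata.annotations.x}'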
Feb 3 11:08:37.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cvscl --namespace=e2e-tests-kubectl-xwpfg -p {"metadata":{"annotations":{"x":"y"}}}' Feb 3 11:08:37.303: INFO: stderr: "" Feb 3 11:08:37.304: INFO: stdout: "pod/redis-master-cvscl patched\n" STEP: checking annotations Feb 3 11:08:37.311: INFO: Selector matched 1 pods for map[app:redis] Feb 3 11:08:37.311: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:08:37.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xwpfg" for this suite. Feb 3 11:09:01.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:09:01.502: INFO: namespace: e2e-tests-kubectl-xwpfg, resource: bindings, ignored listing per whitelist Feb 3 11:09:01.536: INFO: namespace e2e-tests-kubectl-xwpfg deletion completed in 24.218370107s • [SLOW TEST:34.627 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:09:01.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 3 11:09:01.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:02.230: INFO: stderr: "" Feb 3 11:09:02.230: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 11:09:02.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:02.599: INFO: stderr: "" Feb 3 11:09:02.599: INFO: stdout: "update-demo-nautilus-79zrn update-demo-nautilus-cqkfp " Feb 3 11:09:02.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79zrn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:02.769: INFO: stderr: "" Feb 3 11:09:02.769: INFO: stdout: "" Feb 3 11:09:02.769: INFO: update-demo-nautilus-79zrn is created but not running Feb 3 11:09:07.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:07.949: INFO: stderr: "" Feb 3 11:09:07.949: INFO: stdout: "update-demo-nautilus-79zrn update-demo-nautilus-cqkfp " Feb 3 11:09:07.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79zrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:08.125: INFO: stderr: "" Feb 3 11:09:08.125: INFO: stdout: "" Feb 3 11:09:08.126: INFO: update-demo-nautilus-79zrn is created but not running Feb 3 11:09:13.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:13.336: INFO: stderr: "" Feb 3 11:09:13.336: INFO: stdout: "update-demo-nautilus-79zrn update-demo-nautilus-cqkfp " Feb 3 11:09:13.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79zrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:13.495: INFO: stderr: "" Feb 3 11:09:13.495: INFO: stdout: "" Feb 3 11:09:13.495: INFO: update-demo-nautilus-79zrn is created but not running Feb 3 11:09:18.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:18.658: INFO: stderr: "" Feb 3 11:09:18.658: INFO: stdout: "update-demo-nautilus-79zrn update-demo-nautilus-cqkfp " Feb 3 11:09:18.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79zrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:18.817: INFO: stderr: "" Feb 3 11:09:18.817: INFO: stdout: "true" Feb 3 11:09:18.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79zrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:18.977: INFO: stderr: "" Feb 3 11:09:18.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:09:18.977: INFO: validating pod update-demo-nautilus-79zrn Feb 3 11:09:18.997: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:09:18.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 3 11:09:18.997: INFO: update-demo-nautilus-79zrn is verified up and running Feb 3 11:09:18.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqkfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:19.139: INFO: stderr: "" Feb 3 11:09:19.139: INFO: stdout: "true" Feb 3 11:09:19.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqkfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:19.261: INFO: stderr: "" Feb 3 11:09:19.261: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:09:19.261: INFO: validating pod update-demo-nautilus-cqkfp Feb 3 11:09:19.272: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:09:19.272: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 11:09:19.273: INFO: update-demo-nautilus-cqkfp is verified up and running STEP: using delete to clean up resources Feb 3 11:09:19.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:19.406: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 11:09:19.406: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 3 11:09:19.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-gvb2j' Feb 3 11:09:19.682: INFO: stderr: "No resources found.\n" Feb 3 11:09:19.683: INFO: stdout: "" Feb 3 11:09:19.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-gvb2j -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 11:09:19.832: INFO: stderr: "" Feb 3 11:09:19.832: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:09:19.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gvb2j" for this suite. 
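Stripped of the polling, the create-and-stop flow above reduces to three kubectl calls; a minimal manual replay, assuming a reachable cluster and using placeholder namespace and manifest names, looks like:

# create the replication controller from a manifest (the suite pipes it over stdin)
kubectl --namespace=my-test-ns create -f nautilus-rc.yaml
# list the pods selected by the controller together with their phases
kubectl --namespace=my-test-ns get pods -l name=update-demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
# tear down immediately, exactly as the test's cleanup does
kubectl --namespace=my-test-ns delete rc update-demo-nautilus --grace-period=0 --force
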
Feb 3 11:09:44.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:09:44.300: INFO: namespace: e2e-tests-kubectl-gvb2j, resource: bindings, ignored listing per whitelist Feb 3 11:09:44.351: INFO: namespace e2e-tests-kubectl-gvb2j deletion completed in 24.503092344s • [SLOW TEST:42.814 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:09:44.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0203 11:10:15.247920 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 11:10:15.248: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:10:15.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2r464" for this suite. 
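The orphaning check above hinges on deleteOptions.propagationPolicy=Orphan: the Deployment is removed but its ReplicaSet must survive the 30-second window. On kubectl of this vintage the same behaviour can be requested from the command line with --cascade=false; the deployment and label names below are illustrative:

# delete only the Deployment object, leaving owned ReplicaSets in place
kubectl delete deployment demo-deploy --cascade=false
# the ReplicaSet created by the Deployment should still be listed
kubectl get rs -l app=demo
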
Feb 3 11:10:23.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:10:23.707: INFO: namespace: e2e-tests-gc-2r464, resource: bindings, ignored listing per whitelist Feb 3 11:10:23.730: INFO: namespace e2e-tests-gc-2r464 deletion completed in 8.470982522s • [SLOW TEST:39.378 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:10:23.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 3 11:10:24.273: INFO: Waiting up to 5m0s for pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005" in namespace "e2e-tests-var-expansion-jcc28" to be "success or failure" Feb 3 11:10:24.552: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 278.140599ms Feb 3 11:10:26.855: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.581322932s Feb 3 11:10:28.871: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597560596s Feb 3 11:10:31.648: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.374105132s Feb 3 11:10:33.660: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.386807899s Feb 3 11:10:35.724: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.450861277s STEP: Saw pod success Feb 3 11:10:35.725: INFO: Pod "var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:10:35.762: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005 container dapi-container: STEP: delete the pod Feb 3 11:10:35.932: INFO: Waiting for pod var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005 to disappear Feb 3 11:10:35.942: INFO: Pod var-expansion-c66bef5c-4675-11ea-ab15-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:10:35.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-jcc28" for this suite. 
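The env-composition case above relies on the $(VAR) substitution Kubernetes performs inside a pod spec before the container starts. A stripped-down pod showing the same thing, with the names and the busybox image chosen purely for illustration (the quoted heredoc keeps the shell from expanding $(...) itself), could be:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # $(COMPOSED_VAR) is resolved from the env list below before the shell runs
    command: ["sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: BASE_VAR
      value: "base"
    - name: COMPOSED_VAR
      value: "$(BASE_VAR)-suffix"
EOF
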
Feb 3 11:10:42.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:10:42.162: INFO: namespace: e2e-tests-var-expansion-jcc28, resource: bindings, ignored listing per whitelist Feb 3 11:10:42.211: INFO: namespace e2e-tests-var-expansion-jcc28 deletion completed in 6.24984682s • [SLOW TEST:18.480 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:10:42.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 3 11:10:42.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-zmktt" to be "success or failure" Feb 3 11:10:42.453: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.940901ms Feb 3 11:10:44.474: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049753562s Feb 3 11:10:46.500: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075500889s Feb 3 11:10:48.838: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413402919s Feb 3 11:10:50.873: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.448604243s STEP: Saw pod success Feb 3 11:10:50.873: INFO: Pod "downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:10:50.888: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005 container client-container: STEP: delete the pod Feb 3 11:10:51.013: INFO: Waiting for pod downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005 to disappear Feb 3 11:10:51.021: INFO: Pod downwardapi-volume-d1389ee5-4675-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:10:51.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zmktt" for this suite. 
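The cpu-request figure read back above comes from a resourceFieldRef exposed through a projected downwardAPI volume. A minimal sketch of such a pod, with every name, path, image and the 250m request below being illustrative placeholders, is:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # print the file the kubelet writes from this container's own cpu request
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
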
Feb 3 11:10:57.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:10:57.269: INFO: namespace: e2e-tests-projected-zmktt, resource: bindings, ignored listing per whitelist Feb 3 11:10:57.502: INFO: namespace e2e-tests-projected-zmktt deletion completed in 6.472470079s • [SLOW TEST:15.291 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:10:57.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:10:57.739: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:11:08.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-l8w9j" for this suite. 
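The remote-command check above drives the pod's exec subresource over a websocket; kubectl exercises the same subresource (over SPDY) with a one-liner, the pod name here being a placeholder:

# run a command inside the pod's only container via the exec subresource
kubectl --namespace=my-test-ns exec pod-exec-demo -- cat /etc/resolv.conf
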
Feb 3 11:11:50.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:11:50.557: INFO: namespace: e2e-tests-pods-l8w9j, resource: bindings, ignored listing per whitelist Feb 3 11:11:50.604: INFO: namespace e2e-tests-pods-l8w9j deletion completed in 42.227653256s • [SLOW TEST:53.101 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:11:50.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-fa412ed3-4675-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:11:51.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-kvh4z" to be "success or failure" Feb 3 11:11:51.343: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 131.005779ms Feb 3 11:11:53.592: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379203243s Feb 3 11:11:55.611: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398698111s Feb 3 11:11:57.795: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582427888s Feb 3 11:12:00.026: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81340049s Feb 3 11:12:02.044: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.832031816s Feb 3 11:12:04.058: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.845436012s STEP: Saw pod success Feb 3 11:12:04.058: INFO: Pod "pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:12:04.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 3 11:12:04.824: INFO: Waiting for pod pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005 to disappear Feb 3 11:12:04.839: INFO: Pod pod-configmaps-fa4284a3-4675-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:12:04.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kvh4z" for this suite. Feb 3 11:12:10.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:12:11.091: INFO: namespace: e2e-tests-configmap-kvh4z, resource: bindings, ignored listing per whitelist Feb 3 11:12:11.159: INFO: namespace e2e-tests-configmap-kvh4z deletion completed in 6.308041109s • [SLOW TEST:20.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:12:11.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 3 11:12:11.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-85fk6" to be "success or failure" Feb 3 11:12:11.585: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 165.223723ms Feb 3 11:12:13.617: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197422722s Feb 3 11:12:15.630: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210305083s Feb 3 11:12:17.801: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381103099s Feb 3 11:12:19.815: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.394919943s Feb 3 11:12:21.835: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.415153507s STEP: Saw pod success Feb 3 11:12:21.835: INFO: Pod "downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:12:21.841: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005 container client-container: STEP: delete the pod Feb 3 11:12:22.423: INFO: Waiting for pod downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:12:22.730: INFO: Pod downwardapi-volume-0649e21f-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:12:22.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-85fk6" for this suite. Feb 3 11:12:28.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:12:28.838: INFO: namespace: e2e-tests-downward-api-85fk6, resource: bindings, ignored listing per whitelist Feb 3 11:12:28.938: INFO: namespace e2e-tests-downward-api-85fk6 deletion completed in 6.19037761s • [SLOW TEST:17.779 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:12:28.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 3 11:12:29.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-xqp8b" to be "success or failure" Feb 3 11:12:29.207: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.807334ms Feb 3 11:12:31.470: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314180946s Feb 3 11:12:33.488: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.331473208s Feb 3 11:12:35.502: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345523064s Feb 3 11:12:37.512: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356263322s Feb 3 11:12:39.533: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.376663156s STEP: Saw pod success Feb 3 11:12:39.533: INFO: Pod "downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:12:39.540: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005 container client-container: STEP: delete the pod Feb 3 11:12:39.873: INFO: Waiting for pod downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:12:39.910: INFO: Pod downwardapi-volume-10de6f47-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:12:39.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xqp8b" for this suite. Feb 3 11:12:46.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:12:46.167: INFO: namespace: e2e-tests-downward-api-xqp8b, resource: bindings, ignored listing per whitelist Feb 3 11:12:46.348: INFO: namespace e2e-tests-downward-api-xqp8b deletion completed in 6.424851939s • [SLOW TEST:17.409 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:12:46.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 3 11:12:57.290: INFO: Successfully updated pod "labelsupdate1b429358-4676-11ea-ab15-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:12:59.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wsftb" for this suite. 
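The label-update case above works because a downwardAPI volume exposing metadata.labels is refreshed by the kubelet after the pod's labels change; given such a pod, the manual equivalent (pod name and mount path are placeholders) is:

# change a label on the running pod
kubectl --namespace=my-test-ns label pod labels-demo purpose=updated --overwrite
# a little later the projected file reflects the new label set
kubectl --namespace=my-test-ns exec labels-demo -- cat /etc/podinfo/labels
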
Feb 3 11:13:23.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:13:23.659: INFO: namespace: e2e-tests-downward-api-wsftb, resource: bindings, ignored listing per whitelist Feb 3 11:13:23.729: INFO: namespace e2e-tests-downward-api-wsftb deletion completed in 24.277845397s • [SLOW TEST:37.380 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:13:23.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-31a3e35b-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume secrets Feb 3 11:13:24.185: INFO: Waiting up to 5m0s for pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-zf2mn" to be "success or failure" Feb 3 11:13:24.324: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.611273ms Feb 3 11:13:26.713: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.527935852s Feb 3 11:13:28.727: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541897586s Feb 3 11:13:30.742: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557729201s Feb 3 11:13:32.755: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570178439s Feb 3 11:13:34.818: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633011039s STEP: Saw pod success Feb 3 11:13:34.818: INFO: Pod "pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:13:34.829: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 3 11:13:35.015: INFO: Waiting for pod pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:13:35.022: INFO: Pod pod-secrets-31a799ae-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:13:35.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zf2mn" for this suite. 
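The secret-volume case above boils down to creating a Secret and reading its keys back as files from the mount point; assuming a pod that mounts the Secret at /etc/secret-volume (all names and the key below are placeholders), that is:

# create a secret with a single key
kubectl --namespace=my-test-ns create secret generic volume-demo-secret --from-literal=data-1=value-1
# each key of the mounted secret appears as a file under the mount path
kubectl --namespace=my-test-ns exec pod-secrets-demo -- cat /etc/secret-volume/data-1
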
Feb 3 11:13:41.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:13:41.278: INFO: namespace: e2e-tests-secrets-zf2mn, resource: bindings, ignored listing per whitelist Feb 3 11:13:41.302: INFO: namespace e2e-tests-secrets-zf2mn deletion completed in 6.267499034s • [SLOW TEST:17.573 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:13:41.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3c21573e-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume secrets Feb 3 11:13:41.761: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-652hv" to be "success or failure" Feb 3 11:13:41.793: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.126626ms Feb 3 11:13:44.057: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296568169s Feb 3 11:13:46.082: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320887942s Feb 3 11:13:48.148: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387396185s Feb 3 11:13:50.986: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224905751s Feb 3 11:13:53.017: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.256208993s STEP: Saw pod success Feb 3 11:13:53.017: INFO: Pod "pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:13:53.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 3 11:13:53.540: INFO: Waiting for pod pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:13:53.565: INFO: Pod pod-projected-secrets-3c25b83c-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:13:53.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-652hv" for this suite. Feb 3 11:13:59.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:13:59.788: INFO: namespace: e2e-tests-projected-652hv, resource: bindings, ignored listing per whitelist Feb 3 11:13:59.805: INFO: namespace e2e-tests-projected-652hv deletion completed in 6.228840401s • [SLOW TEST:18.502 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:13:59.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-470dfa77-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:14:00.085: INFO: Waiting up to 5m0s for pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-x4pkc" to be "success or failure" Feb 3 11:14:00.113: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.012826ms Feb 3 11:14:02.502: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416880696s Feb 3 11:14:04.523: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437430436s Feb 3 11:14:06.556: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470488061s Feb 3 11:14:08.596: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.510265695s Feb 3 11:14:10.626: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.540544509s STEP: Saw pod success Feb 3 11:14:10.626: INFO: Pod "pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:14:10.632: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 3 11:14:10.815: INFO: Waiting for pod pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:14:10.844: INFO: Pod pod-configmaps-470f173c-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:14:10.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-x4pkc" for this suite. Feb 3 11:14:17.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:14:17.304: INFO: namespace: e2e-tests-configmap-x4pkc, resource: bindings, ignored listing per whitelist Feb 3 11:14:17.393: INFO: namespace e2e-tests-configmap-x4pkc deletion completed in 6.526385644s • [SLOW TEST:17.588 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:14:17.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 3 11:14:17.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:19.897: INFO: stderr: "" Feb 3 11:14:19.898: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 3 11:14:19.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:20.041: INFO: stderr: "" Feb 3 11:14:20.041: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 3 11:14:25.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:25.234: INFO: stderr: "" Feb 3 11:14:25.234: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " Feb 3 11:14:25.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:25.381: INFO: stderr: "" Feb 3 11:14:25.382: INFO: stdout: "" Feb 3 11:14:25.382: INFO: update-demo-nautilus-f9hjq is created but not running Feb 3 11:14:30.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:30.571: INFO: stderr: "" Feb 3 11:14:30.571: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " Feb 3 11:14:30.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:30.733: INFO: stderr: "" Feb 3 11:14:30.733: INFO: stdout: "" Feb 3 11:14:30.733: INFO: update-demo-nautilus-f9hjq is created but not running Feb 3 11:14:35.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:35.932: INFO: stderr: "" Feb 3 11:14:35.932: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " Feb 3 11:14:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:36.053: INFO: stderr: "" Feb 3 11:14:36.053: INFO: stdout: "true" Feb 3 11:14:36.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:36.160: INFO: stderr: "" Feb 3 11:14:36.160: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:14:36.160: INFO: validating pod update-demo-nautilus-f9hjq Feb 3 11:14:36.172: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:14:36.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
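The scale-down and scale-up steps that follow are plain kubectl scale calls against the replication controller; run by hand (namespace is a placeholder) they are:

# drop to a single replica, waiting up to five minutes for the controller to converge
kubectl --namespace=my-test-ns scale rc update-demo-nautilus --replicas=1 --timeout=5m
# and back up to two
kubectl --namespace=my-test-ns scale rc update-demo-nautilus --replicas=2 --timeout=5m
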
Feb 3 11:14:36.172: INFO: update-demo-nautilus-f9hjq is verified up and running Feb 3 11:14:36.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mblz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:36.291: INFO: stderr: "" Feb 3 11:14:36.291: INFO: stdout: "true" Feb 3 11:14:36.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mblz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:36.401: INFO: stderr: "" Feb 3 11:14:36.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:14:36.401: INFO: validating pod update-demo-nautilus-mblz9 Feb 3 11:14:36.412: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:14:36.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 11:14:36.412: INFO: update-demo-nautilus-mblz9 is verified up and running STEP: scaling down the replication controller Feb 3 11:14:36.415: INFO: scanned /root for discovery docs: Feb 3 11:14:36.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:37.904: INFO: stderr: "" Feb 3 11:14:37.904: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 11:14:37.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:38.210: INFO: stderr: "" Feb 3 11:14:38.210: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 3 11:14:43.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:43.397: INFO: stderr: "" Feb 3 11:14:43.397: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 3 11:14:48.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:48.689: INFO: stderr: "" Feb 3 11:14:48.689: INFO: stdout: "update-demo-nautilus-f9hjq update-demo-nautilus-mblz9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 3 11:14:53.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:53.951: INFO: stderr: "" Feb 3 11:14:53.952: INFO: stdout: "update-demo-nautilus-f9hjq " Feb 3 11:14:53.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:54.223: INFO: stderr: "" Feb 3 11:14:54.223: INFO: stdout: "true" Feb 3 11:14:54.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:54.386: INFO: stderr: "" Feb 3 11:14:54.386: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:14:54.386: INFO: validating pod update-demo-nautilus-f9hjq Feb 3 11:14:54.408: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:14:54.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 11:14:54.409: INFO: update-demo-nautilus-f9hjq is verified up and running STEP: scaling up the replication controller Feb 3 11:14:54.412: INFO: scanned /root for discovery docs: Feb 3 11:14:54.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:55.773: INFO: stderr: "" Feb 3 11:14:55.774: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 11:14:55.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:55.938: INFO: stderr: "" Feb 3 11:14:55.938: INFO: stdout: "update-demo-nautilus-5s24j update-demo-nautilus-f9hjq " Feb 3 11:14:55.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s24j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:14:56.372: INFO: stderr: "" Feb 3 11:14:56.372: INFO: stdout: "" Feb 3 11:14:56.372: INFO: update-demo-nautilus-5s24j is created but not running Feb 3 11:15:01.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:01.533: INFO: stderr: "" Feb 3 11:15:01.533: INFO: stdout: "update-demo-nautilus-5s24j update-demo-nautilus-f9hjq " Feb 3 11:15:01.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s24j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:01.648: INFO: stderr: "" Feb 3 11:15:01.648: INFO: stdout: "" Feb 3 11:15:01.648: INFO: update-demo-nautilus-5s24j is created but not running Feb 3 11:15:06.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:06.812: INFO: stderr: "" Feb 3 11:15:06.812: INFO: stdout: "update-demo-nautilus-5s24j update-demo-nautilus-f9hjq " Feb 3 11:15:06.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s24j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:06.931: INFO: stderr: "" Feb 3 11:15:06.931: INFO: stdout: "true" Feb 3 11:15:06.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s24j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:07.085: INFO: stderr: "" Feb 3 11:15:07.085: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:15:07.086: INFO: validating pod update-demo-nautilus-5s24j Feb 3 11:15:07.096: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:15:07.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 11:15:07.096: INFO: update-demo-nautilus-5s24j is verified up and running Feb 3 11:15:07.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:07.230: INFO: stderr: "" Feb 3 11:15:07.231: INFO: stdout: "true" Feb 3 11:15:07.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9hjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:07.330: INFO: stderr: "" Feb 3 11:15:07.330: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 11:15:07.330: INFO: validating pod update-demo-nautilus-f9hjq Feb 3 11:15:07.340: INFO: got data: { "image": "nautilus.jpg" } Feb 3 11:15:07.341: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 11:15:07.341: INFO: update-demo-nautilus-f9hjq is verified up and running STEP: using delete to clean up resources Feb 3 11:15:07.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:07.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 3 11:15:07.540: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 3 11:15:07.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2jsfp' Feb 3 11:15:07.741: INFO: stderr: "No resources found.\n" Feb 3 11:15:07.742: INFO: stdout: "" Feb 3 11:15:07.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2jsfp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 11:15:07.955: INFO: stderr: "" Feb 3 11:15:07.955: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:15:07.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2jsfp" for this suite. Feb 3 11:15:32.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:15:32.137: INFO: namespace: e2e-tests-kubectl-2jsfp, resource: bindings, ignored listing per whitelist Feb 3 11:15:32.231: INFO: namespace e2e-tests-kubectl-2jsfp deletion completed in 24.248373916s • [SLOW TEST:74.837 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:15:32.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:15:32.352: INFO: Creating ReplicaSet my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005 Feb 3 11:15:32.478: INFO: Pod name my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005: Found 0 pods out of 1 Feb 3 11:15:37.496: INFO: Pod name my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005: Found 1 pods out of 1 Feb 3 11:15:37.496: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005" is running Feb 3 11:15:41.516: INFO: Pod "my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005-6ttfw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:15:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:15:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:15:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:15:32 +0000 UTC Reason: Message:}]) Feb 3 11:15:41.517: INFO: Trying to dial the pod Feb 3 11:15:47.053: INFO: Controller my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005: Got expected result from replica 1 [my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005-6ttfw]: "my-hostname-basic-7e13afda-4676-11ea-ab15-0242ac110005-6ttfw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:15:47.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-p9cln" for this suite. Feb 3 11:15:53.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:15:53.677: INFO: namespace: e2e-tests-replicaset-p9cln, resource: bindings, ignored listing per whitelist Feb 3 11:15:53.802: INFO: namespace e2e-tests-replicaset-p9cln deletion completed in 6.725072749s • [SLOW TEST:21.571 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:15:53.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7k9gh in namespace e2e-tests-proxy-wqdk6 I0203 11:15:54.251897 8 runners.go:184] Created replication controller with name: proxy-service-7k9gh, namespace: e2e-tests-proxy-wqdk6, replica count: 1 I0203 11:15:55.302968 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:15:56.303407 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:15:57.304097 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:15:58.305015 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:15:59.305726 8 runners.go:184] proxy-service-7k9gh Pods: 1 
out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:00.306351 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:01.306941 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:02.307502 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:03.308065 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:04.309048 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:05.309696 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 11:16:06.310987 8 runners.go:184] proxy-service-7k9gh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 3 11:16:06.331: INFO: setup took 12.262824173s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 3 11:16:06.372: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/pods/proxy-service-7k9gh-snqr4:162/proxy/: bar (200; 40.27779ms) Feb 3 11:16:06.373: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/pods/http:proxy-service-7k9gh-snqr4:162/proxy/: bar (200; 41.782407ms) Feb 3 11:16:06.374: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/services/proxy-service-7k9gh:portname2/proxy/: bar (200; 42.518674ms) Feb 3 11:16:06.374: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/services/http:proxy-service-7k9gh:portname1/proxy/: foo (200; 42.721428ms) Feb 3 11:16:06.376: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/pods/http:proxy-service-7k9gh-snqr4:160/proxy/: foo (200; 44.217409ms) Feb 3 11:16:06.376: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/pods/proxy-service-7k9gh-snqr4:160/proxy/: foo (200; 44.860213ms) Feb 3 11:16:06.387: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/services/http:proxy-service-7k9gh:portname2/proxy/: bar (200; 54.854814ms) Feb 3 11:16:06.387: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wqdk6/pods/proxy-service-7k9gh-snqr4:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 3 11:16:19.721: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 3 11:16:19.862: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 3 11:16:25.628: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 11:16:29.646: INFO: Creating deployment "test-rolling-update-deployment" Feb 3 11:16:29.667: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the 
adopted replica set "test-rolling-update-controller" has Feb 3 11:16:29.695: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 3 11:16:31.716: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 3 11:16:31.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325389, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 11:16:33.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325389, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 11:16:35.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325389, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 11:16:37.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325390, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716325389, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 11:16:39.742: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 3 11:16:39.768: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-nnv7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nnv7q/deployments/test-rolling-update-deployment,UID:a03a8ab4-4676-11ea-a994-fa163e34d433,ResourceVersion:20408146,Generation:1,CreationTimestamp:2020-02-03 11:16:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-03 11:16:30 +0000 UTC 2020-02-03 11:16:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-03 11:16:38 +0000 UTC 2020-02-03 11:16:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 3 11:16:39.773: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-nnv7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nnv7q/replicasets/test-rolling-update-deployment-75db98fb4c,UID:a04ea026-4676-11ea-a994-fa163e34d433,ResourceVersion:20408136,Generation:1,CreationTimestamp:2020-02-03 11:16:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a03a8ab4-4676-11ea-a994-fa163e34d433 0xc001feee97 0xc001feee98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 3 11:16:39.773: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 3 11:16:39.774: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-nnv7q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nnv7q/replicasets/test-rolling-update-controller,UID:9a4ff38d-4676-11ea-a994-fa163e34d433,ResourceVersion:20408144,Generation:2,CreationTimestamp:2020-02-03 11:16:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a03a8ab4-4676-11ea-a994-fa163e34d433 0xc001feedbf 0xc001feedd0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 11:16:39.782: INFO: Pod "test-rolling-update-deployment-75db98fb4c-26gg2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-26gg2,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-nnv7q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nnv7q/pods/test-rolling-update-deployment-75db98fb4c-26gg2,UID:a052ad2c-4676-11ea-a994-fa163e34d433,ResourceVersion:20408135,Generation:0,CreationTimestamp:2020-02-03 11:16:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c a04ea026-4676-11ea-a994-fa163e34d433 0xc001fef767 0xc001fef768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f6xcg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f6xcg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-f6xcg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fef7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fef7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:16:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:16:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:16:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:16:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-03 11:16:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-03 11:16:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4092df385b7f429082b44253821d1ba9465b97e810a2561da0660fe8148845bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:16:39.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nnv7q" for this suite. 
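For reference, the rolling-update behavior exercised above corresponds to a Deployment whose strategy matches the status dump earlier in this test (RollingUpdate with 25% maxUnavailable / 25% maxSurge, one replica, the "name: sample-pod" selector and the redis:1.0 test image). The following is a minimal, illustrative Go sketch using only the k8s.io/api types; the object name is a placeholder and nothing here is taken verbatim from the test source.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateDeployment builds a Deployment equivalent in shape to the one the
// log dump above describes. The name is illustrative; the values mirror the dump.
func rollingUpdateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	quarter := intstr.FromString("25%")
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "example-rolling-update"}, // placeholder name
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &quarter,
					MaxSurge:       &quarter,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() { _ = rollingUpdateDeployment() }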
Feb 3 11:16:47.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:16:48.062: INFO: namespace: e2e-tests-deployment-nnv7q, resource: bindings, ignored listing per whitelist Feb 3 11:16:48.172: INFO: namespace e2e-tests-deployment-nnv7q deletion completed in 8.375454866s • [SLOW TEST:28.710 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:16:48.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 3 11:16:49.963: INFO: created pod pod-service-account-defaultsa Feb 3 11:16:49.963: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 3 11:16:49.983: INFO: created pod pod-service-account-mountsa Feb 3 11:16:49.983: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 3 11:16:49.991: INFO: created pod pod-service-account-nomountsa Feb 3 11:16:49.991: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 3 11:16:50.118: INFO: created pod pod-service-account-defaultsa-mountspec Feb 3 11:16:50.118: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 3 11:16:50.208: INFO: created pod pod-service-account-mountsa-mountspec Feb 3 11:16:50.209: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 3 11:16:50.417: INFO: created pod pod-service-account-nomountsa-mountspec Feb 3 11:16:50.417: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 3 11:16:50.501: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 3 11:16:50.502: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 3 11:16:50.610: INFO: created pod pod-service-account-mountsa-nomountspec Feb 3 11:16:50.610: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 3 11:16:50.655: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 3 11:16:50.655: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:16:50.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-8s9fv" for this suite. 
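The ServiceAccounts test above checks the interaction between a ServiceAccount's and a pod's automountServiceAccountToken settings; the pod-level field takes precedence. A minimal, illustrative Go sketch of a pod that opts out of token automount, again using the k8s.io/api types; the pod and ServiceAccount names are placeholders, not taken from the run.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithoutTokenAutomount returns a pod spec that declines the API token volume,
// regardless of what the referenced ServiceAccount specifies.
func podWithoutTokenAutomount() *corev1.Pod {
	automount := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token"}, // placeholder name
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &automount, // pod-level setting overrides the ServiceAccount
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
}

func main() { _ = podWithoutTokenAutomount() }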
Feb 3 11:17:20.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:17:20.840: INFO: namespace: e2e-tests-svcaccounts-8s9fv, resource: bindings, ignored listing per whitelist Feb 3 11:17:20.951: INFO: namespace e2e-tests-svcaccounts-8s9fv deletion completed in 30.272554184s • [SLOW TEST:32.779 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:17:20.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-xk4g STEP: Creating a pod to test atomic-volume-subpath Feb 3 11:17:21.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xk4g" in namespace "e2e-tests-subpath-c29lv" to be "success or failure" Feb 3 11:17:21.633: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 128.367743ms Feb 3 11:17:24.095: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589952356s Feb 3 11:17:26.119: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613803638s Feb 3 11:17:28.185: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679948818s Feb 3 11:17:30.218: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713076524s Feb 3 11:17:32.246: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.740911698s Feb 3 11:17:34.269: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76405667s Feb 3 11:17:36.476: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.970648076s Feb 3 11:17:38.542: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 17.036828996s Feb 3 11:17:40.576: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 19.070659751s Feb 3 11:17:42.603: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 21.098482367s Feb 3 11:17:44.633: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.128590121s Feb 3 11:17:46.649: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 25.143770429s Feb 3 11:17:48.663: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 27.157669225s Feb 3 11:17:50.673: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 29.168127841s Feb 3 11:17:52.705: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 31.199782035s Feb 3 11:17:54.726: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Running", Reason="", readiness=false. Elapsed: 33.221609089s Feb 3 11:17:56.740: INFO: Pod "pod-subpath-test-downwardapi-xk4g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.23489048s STEP: Saw pod success Feb 3 11:17:56.740: INFO: Pod "pod-subpath-test-downwardapi-xk4g" satisfied condition "success or failure" Feb 3 11:17:56.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-xk4g container test-container-subpath-downwardapi-xk4g: STEP: delete the pod Feb 3 11:17:57.680: INFO: Waiting for pod pod-subpath-test-downwardapi-xk4g to disappear Feb 3 11:17:57.697: INFO: Pod pod-subpath-test-downwardapi-xk4g no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-xk4g Feb 3 11:17:57.697: INFO: Deleting pod "pod-subpath-test-downwardapi-xk4g" in namespace "e2e-tests-subpath-c29lv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:17:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c29lv" for this suite. Feb 3 11:18:03.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:18:04.068: INFO: namespace: e2e-tests-subpath-c29lv, resource: bindings, ignored listing per whitelist Feb 3 11:18:04.075: INFO: namespace e2e-tests-subpath-c29lv deletion completed in 6.351658241s • [SLOW TEST:43.123 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:18:04.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d8a315fc-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 
11:18:04.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-7m58c" to be "success or failure" Feb 3 11:18:04.348: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.715409ms Feb 3 11:18:06.396: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076843005s Feb 3 11:18:08.422: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103581181s Feb 3 11:18:10.636: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316733255s Feb 3 11:18:12.760: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441472972s Feb 3 11:18:14.782: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463538002s STEP: Saw pod success Feb 3 11:18:14.783: INFO: Pod "pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:18:14.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 3 11:18:15.086: INFO: Waiting for pod pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:18:15.108: INFO: Pod pod-configmaps-d8a4b57d-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:18:15.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7m58c" for this suite. 
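The ConfigMap volume test above ("mappings and Item mode set") mounts a single ConfigMap key at a mapped path with an explicit file mode. A minimal, illustrative Go sketch of that pattern using the k8s.io/api types; the ConfigMap name, key, path, mode and image below are assumptions for illustration, not values read from the run.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithMappedConfigMapVolume mounts one key of a ConfigMap at a chosen path
// with a non-default file mode, which is the behavior the test verifies.
func podWithMappedConfigMapVolume() *corev1.Pod {
	mode := int32(0400) // assumed example mode; the test asserts the mode it sets is visible in the container
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-mapped"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // placeholder
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // assumed key name
							Path: "path/to/data-2", // assumed mapped path
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}

func main() { _ = podWithMappedConfigMapVolume() }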
Feb 3 11:18:21.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:18:21.240: INFO: namespace: e2e-tests-configmap-7m58c, resource: bindings, ignored listing per whitelist Feb 3 11:18:21.337: INFO: namespace e2e-tests-configmap-7m58c deletion completed in 6.210385218s • [SLOW TEST:17.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:18:21.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-e2fb57b3-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:18:21.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-crtmm" to be "success or failure" Feb 3 11:18:21.758: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.214898ms Feb 3 11:18:23.780: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054887001s Feb 3 11:18:25.823: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097906839s Feb 3 11:18:28.222: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49636995s Feb 3 11:18:30.239: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513853858s Feb 3 11:18:32.253: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.527291689s STEP: Saw pod success Feb 3 11:18:32.253: INFO: Pod "pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:18:32.258: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 3 11:18:32.589: INFO: Waiting for pod pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:18:32.721: INFO: Pod pod-configmaps-e303feb7-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:18:32.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-crtmm" for this suite. Feb 3 11:18:38.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:18:38.942: INFO: namespace: e2e-tests-configmap-crtmm, resource: bindings, ignored listing per whitelist Feb 3 11:18:39.043: INFO: namespace e2e-tests-configmap-crtmm deletion completed in 6.310155373s • [SLOW TEST:17.705 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:18:39.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ed81ddcd-4676-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:18:39.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-g4nkp" to be "success or failure" Feb 3 11:18:39.364: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.406353ms Feb 3 11:18:41.425: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100500131s Feb 3 11:18:43.445: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120587569s Feb 3 11:18:45.545: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221024735s Feb 3 11:18:47.939: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.614873277s Feb 3 11:18:49.994: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.669599706s STEP: Saw pod success Feb 3 11:18:49.994: INFO: Pod "pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:18:50.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 3 11:18:50.090: INFO: Waiting for pod pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:18:50.153: INFO: Pod pod-projected-configmaps-ed83676e-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:18:50.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g4nkp" for this suite. Feb 3 11:18:56.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:18:56.539: INFO: namespace: e2e-tests-projected-g4nkp, resource: bindings, ignored listing per whitelist Feb 3 11:18:56.624: INFO: namespace e2e-tests-projected-g4nkp deletion completed in 6.448296278s • [SLOW TEST:17.580 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:18:56.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 3 11:18:56.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-zd9bq" to be "success or failure" Feb 3 11:18:56.924: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.807846ms Feb 3 11:18:58.938: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096032961s Feb 3 11:19:00.955: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.112736503s Feb 3 11:19:03.170: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327898303s Feb 3 11:19:05.222: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379823919s Feb 3 11:19:07.235: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392537287s STEP: Saw pod success Feb 3 11:19:07.235: INFO: Pod "downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:19:07.243: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005 container client-container: STEP: delete the pod Feb 3 11:19:07.406: INFO: Waiting for pod downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005 to disappear Feb 3 11:19:08.224: INFO: Pod downwardapi-volume-f7f4af4d-4676-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:19:08.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zd9bq" for this suite. Feb 3 11:19:14.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:19:14.687: INFO: namespace: e2e-tests-projected-zd9bq, resource: bindings, ignored listing per whitelist Feb 3 11:19:14.741: INFO: namespace e2e-tests-projected-zd9bq deletion completed in 6.213406911s • [SLOW TEST:18.117 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:19:14.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 3 11:19:39.485: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:39.485: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:39.569640 8 log.go:172] (0xc001604630) (0xc0014cab40) Create stream I0203 11:19:39.569778 8 log.go:172] (0xc001604630) (0xc0014cab40) Stream added, broadcasting: 1 I0203 11:19:39.575771 8 
log.go:172] (0xc001604630) Reply frame received for 1 I0203 11:19:39.575813 8 log.go:172] (0xc001604630) (0xc0021d2320) Create stream I0203 11:19:39.575862 8 log.go:172] (0xc001604630) (0xc0021d2320) Stream added, broadcasting: 3 I0203 11:19:39.577310 8 log.go:172] (0xc001604630) Reply frame received for 3 I0203 11:19:39.577354 8 log.go:172] (0xc001604630) (0xc0014cabe0) Create stream I0203 11:19:39.577369 8 log.go:172] (0xc001604630) (0xc0014cabe0) Stream added, broadcasting: 5 I0203 11:19:39.579688 8 log.go:172] (0xc001604630) Reply frame received for 5 I0203 11:19:39.704683 8 log.go:172] (0xc001604630) Data frame received for 3 I0203 11:19:39.704801 8 log.go:172] (0xc0021d2320) (3) Data frame handling I0203 11:19:39.704896 8 log.go:172] (0xc0021d2320) (3) Data frame sent I0203 11:19:39.849108 8 log.go:172] (0xc001604630) (0xc0021d2320) Stream removed, broadcasting: 3 I0203 11:19:39.849231 8 log.go:172] (0xc001604630) Data frame received for 1 I0203 11:19:39.849287 8 log.go:172] (0xc001604630) (0xc0014cabe0) Stream removed, broadcasting: 5 I0203 11:19:39.849350 8 log.go:172] (0xc0014cab40) (1) Data frame handling I0203 11:19:39.849372 8 log.go:172] (0xc0014cab40) (1) Data frame sent I0203 11:19:39.849386 8 log.go:172] (0xc001604630) (0xc0014cab40) Stream removed, broadcasting: 1 I0203 11:19:39.849411 8 log.go:172] (0xc001604630) Go away received I0203 11:19:39.850067 8 log.go:172] (0xc001604630) (0xc0014cab40) Stream removed, broadcasting: 1 I0203 11:19:39.850122 8 log.go:172] (0xc001604630) (0xc0021d2320) Stream removed, broadcasting: 3 I0203 11:19:39.850160 8 log.go:172] (0xc001604630) (0xc0014cabe0) Stream removed, broadcasting: 5 Feb 3 11:19:39.850: INFO: Exec stderr: "" Feb 3 11:19:39.850: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:39.850: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:39.954090 8 log.go:172] (0xc00093eb00) (0xc000b9f180) Create stream I0203 11:19:39.954349 8 log.go:172] (0xc00093eb00) (0xc000b9f180) Stream added, broadcasting: 1 I0203 11:19:39.962842 8 log.go:172] (0xc00093eb00) Reply frame received for 1 I0203 11:19:39.962964 8 log.go:172] (0xc00093eb00) (0xc0021d23c0) Create stream I0203 11:19:39.962978 8 log.go:172] (0xc00093eb00) (0xc0021d23c0) Stream added, broadcasting: 3 I0203 11:19:39.964080 8 log.go:172] (0xc00093eb00) Reply frame received for 3 I0203 11:19:39.964105 8 log.go:172] (0xc00093eb00) (0xc000b9f2c0) Create stream I0203 11:19:39.964116 8 log.go:172] (0xc00093eb00) (0xc000b9f2c0) Stream added, broadcasting: 5 I0203 11:19:39.964871 8 log.go:172] (0xc00093eb00) Reply frame received for 5 I0203 11:19:40.101896 8 log.go:172] (0xc00093eb00) Data frame received for 3 I0203 11:19:40.101984 8 log.go:172] (0xc0021d23c0) (3) Data frame handling I0203 11:19:40.102008 8 log.go:172] (0xc0021d23c0) (3) Data frame sent I0203 11:19:40.258585 8 log.go:172] (0xc00093eb00) Data frame received for 1 I0203 11:19:40.258786 8 log.go:172] (0xc00093eb00) (0xc0021d23c0) Stream removed, broadcasting: 3 I0203 11:19:40.258920 8 log.go:172] (0xc000b9f180) (1) Data frame handling I0203 11:19:40.258972 8 log.go:172] (0xc000b9f180) (1) Data frame sent I0203 11:19:40.259009 8 log.go:172] (0xc00093eb00) (0xc000b9f180) Stream removed, broadcasting: 1 I0203 11:19:40.259522 8 log.go:172] (0xc00093eb00) (0xc000b9f2c0) Stream removed, broadcasting: 5 I0203 11:19:40.259695 8 log.go:172] 
(0xc00093eb00) Go away received I0203 11:19:40.259767 8 log.go:172] (0xc00093eb00) (0xc000b9f180) Stream removed, broadcasting: 1 I0203 11:19:40.259799 8 log.go:172] (0xc00093eb00) (0xc0021d23c0) Stream removed, broadcasting: 3 I0203 11:19:40.259817 8 log.go:172] (0xc00093eb00) (0xc000b9f2c0) Stream removed, broadcasting: 5 Feb 3 11:19:40.259: INFO: Exec stderr: "" Feb 3 11:19:40.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:40.260: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:40.353218 8 log.go:172] (0xc000ffe9a0) (0xc000905400) Create stream I0203 11:19:40.353395 8 log.go:172] (0xc000ffe9a0) (0xc000905400) Stream added, broadcasting: 1 I0203 11:19:40.359965 8 log.go:172] (0xc000ffe9a0) Reply frame received for 1 I0203 11:19:40.360035 8 log.go:172] (0xc000ffe9a0) (0xc0021d2460) Create stream I0203 11:19:40.360058 8 log.go:172] (0xc000ffe9a0) (0xc0021d2460) Stream added, broadcasting: 3 I0203 11:19:40.361509 8 log.go:172] (0xc000ffe9a0) Reply frame received for 3 I0203 11:19:40.361561 8 log.go:172] (0xc000ffe9a0) (0xc0021d2500) Create stream I0203 11:19:40.361580 8 log.go:172] (0xc000ffe9a0) (0xc0021d2500) Stream added, broadcasting: 5 I0203 11:19:40.363900 8 log.go:172] (0xc000ffe9a0) Reply frame received for 5 I0203 11:19:40.479321 8 log.go:172] (0xc000ffe9a0) Data frame received for 3 I0203 11:19:40.479582 8 log.go:172] (0xc0021d2460) (3) Data frame handling I0203 11:19:40.479640 8 log.go:172] (0xc0021d2460) (3) Data frame sent I0203 11:19:40.712980 8 log.go:172] (0xc000ffe9a0) Data frame received for 1 I0203 11:19:40.713267 8 log.go:172] (0xc000ffe9a0) (0xc0021d2500) Stream removed, broadcasting: 5 I0203 11:19:40.713347 8 log.go:172] (0xc000905400) (1) Data frame handling I0203 11:19:40.713371 8 log.go:172] (0xc000905400) (1) Data frame sent I0203 11:19:40.713442 8 log.go:172] (0xc000ffe9a0) (0xc0021d2460) Stream removed, broadcasting: 3 I0203 11:19:40.713486 8 log.go:172] (0xc000ffe9a0) (0xc000905400) Stream removed, broadcasting: 1 I0203 11:19:40.713523 8 log.go:172] (0xc000ffe9a0) Go away received I0203 11:19:40.714773 8 log.go:172] (0xc000ffe9a0) (0xc000905400) Stream removed, broadcasting: 1 I0203 11:19:40.714909 8 log.go:172] (0xc000ffe9a0) (0xc0021d2460) Stream removed, broadcasting: 3 I0203 11:19:40.714934 8 log.go:172] (0xc000ffe9a0) (0xc0021d2500) Stream removed, broadcasting: 5 Feb 3 11:19:40.714: INFO: Exec stderr: "" Feb 3 11:19:40.715: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:40.715: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:40.799415 8 log.go:172] (0xc000aae2c0) (0xc0021d2820) Create stream I0203 11:19:40.799578 8 log.go:172] (0xc000aae2c0) (0xc0021d2820) Stream added, broadcasting: 1 I0203 11:19:40.803946 8 log.go:172] (0xc000aae2c0) Reply frame received for 1 I0203 11:19:40.804010 8 log.go:172] (0xc000aae2c0) (0xc000905540) Create stream I0203 11:19:40.804021 8 log.go:172] (0xc000aae2c0) (0xc000905540) Stream added, broadcasting: 3 I0203 11:19:40.805223 8 log.go:172] (0xc000aae2c0) Reply frame received for 3 I0203 11:19:40.805249 8 log.go:172] (0xc000aae2c0) (0xc0021d28c0) Create stream I0203 11:19:40.805256 8 log.go:172] (0xc000aae2c0) (0xc0021d28c0) Stream added, broadcasting: 5 I0203 
11:19:40.806239 8 log.go:172] (0xc000aae2c0) Reply frame received for 5 I0203 11:19:40.913433 8 log.go:172] (0xc000aae2c0) Data frame received for 3 I0203 11:19:40.913664 8 log.go:172] (0xc000905540) (3) Data frame handling I0203 11:19:40.913743 8 log.go:172] (0xc000905540) (3) Data frame sent I0203 11:19:41.046676 8 log.go:172] (0xc000aae2c0) Data frame received for 1 I0203 11:19:41.046823 8 log.go:172] (0xc000aae2c0) (0xc000905540) Stream removed, broadcasting: 3 I0203 11:19:41.046895 8 log.go:172] (0xc0021d2820) (1) Data frame handling I0203 11:19:41.046924 8 log.go:172] (0xc0021d2820) (1) Data frame sent I0203 11:19:41.047009 8 log.go:172] (0xc000aae2c0) (0xc0021d28c0) Stream removed, broadcasting: 5 I0203 11:19:41.047078 8 log.go:172] (0xc000aae2c0) (0xc0021d2820) Stream removed, broadcasting: 1 I0203 11:19:41.047100 8 log.go:172] (0xc000aae2c0) Go away received I0203 11:19:41.047425 8 log.go:172] (0xc000aae2c0) (0xc0021d2820) Stream removed, broadcasting: 1 I0203 11:19:41.047443 8 log.go:172] (0xc000aae2c0) (0xc000905540) Stream removed, broadcasting: 3 I0203 11:19:41.047455 8 log.go:172] (0xc000aae2c0) (0xc0021d28c0) Stream removed, broadcasting: 5 Feb 3 11:19:41.047: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 3 11:19:41.047: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:41.047: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:41.139709 8 log.go:172] (0xc000aae790) (0xc0021d2c80) Create stream I0203 11:19:41.139835 8 log.go:172] (0xc000aae790) (0xc0021d2c80) Stream added, broadcasting: 1 I0203 11:19:41.145082 8 log.go:172] (0xc000aae790) Reply frame received for 1 I0203 11:19:41.145148 8 log.go:172] (0xc000aae790) (0xc000b9f540) Create stream I0203 11:19:41.145172 8 log.go:172] (0xc000aae790) (0xc000b9f540) Stream added, broadcasting: 3 I0203 11:19:41.147260 8 log.go:172] (0xc000aae790) Reply frame received for 3 I0203 11:19:41.147488 8 log.go:172] (0xc000aae790) (0xc001bdd540) Create stream I0203 11:19:41.147526 8 log.go:172] (0xc000aae790) (0xc001bdd540) Stream added, broadcasting: 5 I0203 11:19:41.149520 8 log.go:172] (0xc000aae790) Reply frame received for 5 I0203 11:19:41.306973 8 log.go:172] (0xc000aae790) Data frame received for 3 I0203 11:19:41.307051 8 log.go:172] (0xc000b9f540) (3) Data frame handling I0203 11:19:41.307082 8 log.go:172] (0xc000b9f540) (3) Data frame sent I0203 11:19:41.445940 8 log.go:172] (0xc000aae790) Data frame received for 1 I0203 11:19:41.446090 8 log.go:172] (0xc000aae790) (0xc000b9f540) Stream removed, broadcasting: 3 I0203 11:19:41.446244 8 log.go:172] (0xc0021d2c80) (1) Data frame handling I0203 11:19:41.446262 8 log.go:172] (0xc0021d2c80) (1) Data frame sent I0203 11:19:41.446356 8 log.go:172] (0xc000aae790) (0xc001bdd540) Stream removed, broadcasting: 5 I0203 11:19:41.446390 8 log.go:172] (0xc000aae790) (0xc0021d2c80) Stream removed, broadcasting: 1 I0203 11:19:41.446405 8 log.go:172] (0xc000aae790) Go away received I0203 11:19:41.446954 8 log.go:172] (0xc000aae790) (0xc0021d2c80) Stream removed, broadcasting: 1 I0203 11:19:41.446977 8 log.go:172] (0xc000aae790) (0xc000b9f540) Stream removed, broadcasting: 3 I0203 11:19:41.446985 8 log.go:172] (0xc000aae790) (0xc001bdd540) Stream removed, broadcasting: 5 Feb 3 11:19:41.447: INFO: Exec stderr: "" Feb 3 11:19:41.447: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:41.447: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:41.532627 8 log.go:172] (0xc00177e2c0) (0xc001bddc20) Create stream I0203 11:19:41.532739 8 log.go:172] (0xc00177e2c0) (0xc001bddc20) Stream added, broadcasting: 1 I0203 11:19:41.537850 8 log.go:172] (0xc00177e2c0) Reply frame received for 1 I0203 11:19:41.537927 8 log.go:172] (0xc00177e2c0) (0xc001bddcc0) Create stream I0203 11:19:41.537942 8 log.go:172] (0xc00177e2c0) (0xc001bddcc0) Stream added, broadcasting: 3 I0203 11:19:41.538937 8 log.go:172] (0xc00177e2c0) Reply frame received for 3 I0203 11:19:41.538962 8 log.go:172] (0xc00177e2c0) (0xc001f2a320) Create stream I0203 11:19:41.538971 8 log.go:172] (0xc00177e2c0) (0xc001f2a320) Stream added, broadcasting: 5 I0203 11:19:41.543749 8 log.go:172] (0xc00177e2c0) Reply frame received for 5 I0203 11:19:41.649130 8 log.go:172] (0xc00177e2c0) Data frame received for 3 I0203 11:19:41.649217 8 log.go:172] (0xc001bddcc0) (3) Data frame handling I0203 11:19:41.649235 8 log.go:172] (0xc001bddcc0) (3) Data frame sent I0203 11:19:41.766063 8 log.go:172] (0xc00177e2c0) (0xc001bddcc0) Stream removed, broadcasting: 3 I0203 11:19:41.766209 8 log.go:172] (0xc00177e2c0) Data frame received for 1 I0203 11:19:41.766259 8 log.go:172] (0xc001bddc20) (1) Data frame handling I0203 11:19:41.766294 8 log.go:172] (0xc001bddc20) (1) Data frame sent I0203 11:19:41.766331 8 log.go:172] (0xc00177e2c0) (0xc001f2a320) Stream removed, broadcasting: 5 I0203 11:19:41.766398 8 log.go:172] (0xc00177e2c0) (0xc001bddc20) Stream removed, broadcasting: 1 I0203 11:19:41.766413 8 log.go:172] (0xc00177e2c0) Go away received I0203 11:19:41.766766 8 log.go:172] (0xc00177e2c0) (0xc001bddc20) Stream removed, broadcasting: 1 I0203 11:19:41.766788 8 log.go:172] (0xc00177e2c0) (0xc001bddcc0) Stream removed, broadcasting: 3 I0203 11:19:41.766805 8 log.go:172] (0xc00177e2c0) (0xc001f2a320) Stream removed, broadcasting: 5 Feb 3 11:19:41.766: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 3 11:19:41.766: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:41.767: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:41.868807 8 log.go:172] (0xc00177e790) (0xc001a66000) Create stream I0203 11:19:41.869113 8 log.go:172] (0xc00177e790) (0xc001a66000) Stream added, broadcasting: 1 I0203 11:19:41.901329 8 log.go:172] (0xc00177e790) Reply frame received for 1 I0203 11:19:41.901532 8 log.go:172] (0xc00177e790) (0xc0001114a0) Create stream I0203 11:19:41.901544 8 log.go:172] (0xc00177e790) (0xc0001114a0) Stream added, broadcasting: 3 I0203 11:19:41.903090 8 log.go:172] (0xc00177e790) Reply frame received for 3 I0203 11:19:41.903171 8 log.go:172] (0xc00177e790) (0xc000358000) Create stream I0203 11:19:41.903181 8 log.go:172] (0xc00177e790) (0xc000358000) Stream added, broadcasting: 5 I0203 11:19:41.904329 8 log.go:172] (0xc00177e790) Reply frame received for 5 I0203 11:19:42.026447 8 log.go:172] (0xc00177e790) Data frame received for 3 I0203 11:19:42.026589 8 log.go:172] (0xc0001114a0) (3) Data frame handling I0203 11:19:42.026634 8 log.go:172] (0xc0001114a0) (3) Data 
frame sent I0203 11:19:42.207433 8 log.go:172] (0xc00177e790) Data frame received for 1 I0203 11:19:42.207700 8 log.go:172] (0xc00177e790) (0xc000358000) Stream removed, broadcasting: 5 I0203 11:19:42.207897 8 log.go:172] (0xc001a66000) (1) Data frame handling I0203 11:19:42.208049 8 log.go:172] (0xc001a66000) (1) Data frame sent I0203 11:19:42.208132 8 log.go:172] (0xc00177e790) (0xc0001114a0) Stream removed, broadcasting: 3 I0203 11:19:42.208233 8 log.go:172] (0xc00177e790) (0xc001a66000) Stream removed, broadcasting: 1 I0203 11:19:42.208268 8 log.go:172] (0xc00177e790) Go away received I0203 11:19:42.208759 8 log.go:172] (0xc00177e790) (0xc001a66000) Stream removed, broadcasting: 1 I0203 11:19:42.208778 8 log.go:172] (0xc00177e790) (0xc0001114a0) Stream removed, broadcasting: 3 I0203 11:19:42.208802 8 log.go:172] (0xc00177e790) (0xc000358000) Stream removed, broadcasting: 5 Feb 3 11:19:42.208: INFO: Exec stderr: "" Feb 3 11:19:42.209: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:42.209: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:42.348198 8 log.go:172] (0xc0016044d0) (0xc00045e780) Create stream I0203 11:19:42.348385 8 log.go:172] (0xc0016044d0) (0xc00045e780) Stream added, broadcasting: 1 I0203 11:19:42.355096 8 log.go:172] (0xc0016044d0) Reply frame received for 1 I0203 11:19:42.355174 8 log.go:172] (0xc0016044d0) (0xc00045f7c0) Create stream I0203 11:19:42.355200 8 log.go:172] (0xc0016044d0) (0xc00045f7c0) Stream added, broadcasting: 3 I0203 11:19:42.357019 8 log.go:172] (0xc0016044d0) Reply frame received for 3 I0203 11:19:42.357094 8 log.go:172] (0xc0016044d0) (0xc0001a2b40) Create stream I0203 11:19:42.357120 8 log.go:172] (0xc0016044d0) (0xc0001a2b40) Stream added, broadcasting: 5 I0203 11:19:42.359408 8 log.go:172] (0xc0016044d0) Reply frame received for 5 I0203 11:19:42.601393 8 log.go:172] (0xc0016044d0) Data frame received for 3 I0203 11:19:42.601543 8 log.go:172] (0xc00045f7c0) (3) Data frame handling I0203 11:19:42.601583 8 log.go:172] (0xc00045f7c0) (3) Data frame sent I0203 11:19:42.830572 8 log.go:172] (0xc0016044d0) Data frame received for 1 I0203 11:19:42.830718 8 log.go:172] (0xc00045e780) (1) Data frame handling I0203 11:19:42.830746 8 log.go:172] (0xc00045e780) (1) Data frame sent I0203 11:19:42.830787 8 log.go:172] (0xc0016044d0) (0xc00045e780) Stream removed, broadcasting: 1 I0203 11:19:42.830962 8 log.go:172] (0xc0016044d0) (0xc0001a2b40) Stream removed, broadcasting: 5 I0203 11:19:42.831084 8 log.go:172] (0xc0016044d0) (0xc00045f7c0) Stream removed, broadcasting: 3 I0203 11:19:42.831142 8 log.go:172] (0xc0016044d0) Go away received I0203 11:19:42.831308 8 log.go:172] (0xc0016044d0) (0xc00045e780) Stream removed, broadcasting: 1 I0203 11:19:42.831342 8 log.go:172] (0xc0016044d0) (0xc00045f7c0) Stream removed, broadcasting: 3 I0203 11:19:42.831351 8 log.go:172] (0xc0016044d0) (0xc0001a2b40) Stream removed, broadcasting: 5 Feb 3 11:19:42.831: INFO: Exec stderr: "" Feb 3 11:19:42.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:42.831: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:42.901336 8 log.go:172] (0xc00093eb00) (0xc0001a3b80) Create stream I0203 11:19:42.901497 8 
log.go:172] (0xc00093eb00) (0xc0001a3b80) Stream added, broadcasting: 1 I0203 11:19:42.907633 8 log.go:172] (0xc00093eb00) Reply frame received for 1 I0203 11:19:42.907675 8 log.go:172] (0xc00093eb00) (0xc0001119a0) Create stream I0203 11:19:42.907685 8 log.go:172] (0xc00093eb00) (0xc0001119a0) Stream added, broadcasting: 3 I0203 11:19:42.908782 8 log.go:172] (0xc00093eb00) Reply frame received for 3 I0203 11:19:42.908824 8 log.go:172] (0xc00093eb00) (0xc00053c0a0) Create stream I0203 11:19:42.908835 8 log.go:172] (0xc00093eb00) (0xc00053c0a0) Stream added, broadcasting: 5 I0203 11:19:42.910167 8 log.go:172] (0xc00093eb00) Reply frame received for 5 I0203 11:19:42.999002 8 log.go:172] (0xc00093eb00) Data frame received for 3 I0203 11:19:42.999069 8 log.go:172] (0xc0001119a0) (3) Data frame handling I0203 11:19:42.999096 8 log.go:172] (0xc0001119a0) (3) Data frame sent I0203 11:19:43.098635 8 log.go:172] (0xc00093eb00) Data frame received for 1 I0203 11:19:43.098816 8 log.go:172] (0xc00093eb00) (0xc0001119a0) Stream removed, broadcasting: 3 I0203 11:19:43.098902 8 log.go:172] (0xc0001a3b80) (1) Data frame handling I0203 11:19:43.098964 8 log.go:172] (0xc0001a3b80) (1) Data frame sent I0203 11:19:43.099013 8 log.go:172] (0xc00093eb00) (0xc00053c0a0) Stream removed, broadcasting: 5 I0203 11:19:43.099072 8 log.go:172] (0xc00093eb00) (0xc0001a3b80) Stream removed, broadcasting: 1 I0203 11:19:43.099089 8 log.go:172] (0xc00093eb00) Go away received I0203 11:19:43.099456 8 log.go:172] (0xc00093eb00) (0xc0001a3b80) Stream removed, broadcasting: 1 I0203 11:19:43.099503 8 log.go:172] (0xc00093eb00) (0xc0001119a0) Stream removed, broadcasting: 3 I0203 11:19:43.099532 8 log.go:172] (0xc00093eb00) (0xc00053c0a0) Stream removed, broadcasting: 5 Feb 3 11:19:43.099: INFO: Exec stderr: "" Feb 3 11:19:43.099: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cq4jf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:19:43.099: INFO: >>> kubeConfig: /root/.kube/config I0203 11:19:43.224104 8 log.go:172] (0xc0016049a0) (0xc0008e21e0) Create stream I0203 11:19:43.224267 8 log.go:172] (0xc0016049a0) (0xc0008e21e0) Stream added, broadcasting: 1 I0203 11:19:43.232100 8 log.go:172] (0xc0016049a0) Reply frame received for 1 I0203 11:19:43.232232 8 log.go:172] (0xc0016049a0) (0xc001b34000) Create stream I0203 11:19:43.232244 8 log.go:172] (0xc0016049a0) (0xc001b34000) Stream added, broadcasting: 3 I0203 11:19:43.233487 8 log.go:172] (0xc0016049a0) Reply frame received for 3 I0203 11:19:43.233527 8 log.go:172] (0xc0016049a0) (0xc000c90000) Create stream I0203 11:19:43.233542 8 log.go:172] (0xc0016049a0) (0xc000c90000) Stream added, broadcasting: 5 I0203 11:19:43.234993 8 log.go:172] (0xc0016049a0) Reply frame received for 5 I0203 11:19:43.338127 8 log.go:172] (0xc0016049a0) Data frame received for 3 I0203 11:19:43.338275 8 log.go:172] (0xc001b34000) (3) Data frame handling I0203 11:19:43.338325 8 log.go:172] (0xc001b34000) (3) Data frame sent I0203 11:19:43.441914 8 log.go:172] (0xc0016049a0) Data frame received for 1 I0203 11:19:43.441982 8 log.go:172] (0xc0016049a0) (0xc001b34000) Stream removed, broadcasting: 3 I0203 11:19:43.442025 8 log.go:172] (0xc0008e21e0) (1) Data frame handling I0203 11:19:43.442037 8 log.go:172] (0xc0008e21e0) (1) Data frame sent I0203 11:19:43.442047 8 log.go:172] (0xc0016049a0) (0xc0008e21e0) Stream removed, broadcasting: 1 I0203 11:19:43.442091 8 log.go:172] 
(0xc0016049a0) (0xc000c90000) Stream removed, broadcasting: 5 I0203 11:19:43.442194 8 log.go:172] (0xc0016049a0) Go away received I0203 11:19:43.442222 8 log.go:172] (0xc0016049a0) (0xc0008e21e0) Stream removed, broadcasting: 1 I0203 11:19:43.442233 8 log.go:172] (0xc0016049a0) (0xc001b34000) Stream removed, broadcasting: 3 I0203 11:19:43.442241 8 log.go:172] (0xc0016049a0) (0xc000c90000) Stream removed, broadcasting: 5 Feb 3 11:19:43.442: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:19:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-cq4jf" for this suite. Feb 3 11:20:35.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:20:35.576: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-cq4jf, resource: bindings, ignored listing per whitelist Feb 3 11:20:35.671: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-cq4jf deletion completed in 52.212317899s • [SLOW TEST:80.930 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:20:35.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 3 11:20:35.945: INFO: Waiting up to 5m0s for pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005" in namespace "e2e-tests-containers-tvgrg" to be "success or failure" Feb 3 11:20:35.967: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.211549ms Feb 3 11:20:38.158: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212450018s Feb 3 11:20:40.180: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234952985s Feb 3 11:20:42.363: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417380832s Feb 3 11:20:44.405: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45959786s Feb 3 11:20:46.421: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.47595769s STEP: Saw pod success Feb 3 11:20:46.421: INFO: Pod "client-containers-3304e6b1-4677-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:20:46.437: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3304e6b1-4677-11ea-ab15-0242ac110005 container test-container: STEP: delete the pod Feb 3 11:20:46.653: INFO: Waiting for pod client-containers-3304e6b1-4677-11ea-ab15-0242ac110005 to disappear Feb 3 11:20:46.724: INFO: Pod client-containers-3304e6b1-4677-11ea-ab15-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:20:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tvgrg" for this suite. Feb 3 11:20:52.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:20:52.853: INFO: namespace: e2e-tests-containers-tvgrg, resource: bindings, ignored listing per whitelist Feb 3 11:20:52.928: INFO: namespace e2e-tests-containers-tvgrg deletion completed in 6.180300063s • [SLOW TEST:17.257 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:20:52.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:21:03.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9646l" for this suite. 
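
The Docker Containers spec above ("should be able to override the image's default command") creates a pod whose container sets `command`, which replaces the image's ENTRYPOINT, waits for it to succeed, and then fetches the test-container log. A minimal sketch of a pod of that shape; the image, command, and arguments here are illustrative, not the exact ones the suite generates:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # any image with a default entrypoint
    command: ["/bin/echo"]              # overrides the image ENTRYPOINT
    args: ["entrypoint", "overridden"]  # overrides the image CMD

After "Saw pod success" the container log is read back, which is where the overridden command's output shows up.
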
Feb 3 11:21:09.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:21:10.064: INFO: namespace: e2e-tests-emptydir-wrapper-9646l, resource: bindings, ignored listing per whitelist Feb 3 11:21:10.152: INFO: namespace e2e-tests-emptydir-wrapper-9646l deletion completed in 6.320090262s • [SLOW TEST:17.224 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:21:10.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-47990dc3-4677-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume secrets Feb 3 11:21:10.485: INFO: Waiting up to 5m0s for pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-nm6fl" to be "success or failure" Feb 3 11:21:10.531: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.289177ms Feb 3 11:21:12.567: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081148039s Feb 3 11:21:14.654: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169017242s Feb 3 11:21:16.760: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27457179s Feb 3 11:21:18.778: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293088973s Feb 3 11:21:20.792: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307082372s STEP: Saw pod success Feb 3 11:21:20.793: INFO: Pod "pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:21:20.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 3 11:21:21.248: INFO: Waiting for pod pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005 to disappear Feb 3 11:21:21.507: INFO: Pod pod-secrets-479bd0b4-4677-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:21:21.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nm6fl" for this suite. 
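
The Secrets volume spec just above mounts a secret with `defaultMode` set and reads the projected file back from the secret-volume-test container. A minimal sketch of that shape of manifest; the secret name, key, and mode value are illustrative rather than the generated ones:

apiVersion: v1
kind: Secret
metadata:
  name: secret-defaultmode-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-defaultmode-demo
      defaultMode: 0400                 # octal mode applied to every projected key

`defaultMode` controls the permission bits of the files projected from the secret's keys; the `ls -l` output in the container log is what a check like this inspects.
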
Feb 3 11:21:27.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:21:27.755: INFO: namespace: e2e-tests-secrets-nm6fl, resource: bindings, ignored listing per whitelist Feb 3 11:21:27.823: INFO: namespace e2e-tests-secrets-nm6fl deletion completed in 6.304258644s • [SLOW TEST:17.670 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:21:27.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-522c51dd-4677-11ea-ab15-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:21:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z9g6z" for this suite. 
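
The ConfigMap spec just above ("binary data should be reflected in volume") waits for both a text key and a binary key to appear in the mounted volume. A sketch of a ConfigMap carrying both kinds of data plus a pod that mounts it; the names, values, and busybox image are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo
data:
  text-data: "hello"                    # plain UTF-8 value
binaryData:
  binary-data: 3q2+7w==                 # base64-encoded bytes (0xDE 0xAD 0xBE 0xEF)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/text-data && base64 /etc/configmap-volume/binary-data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-binary-demo

Keys under `data` are projected as UTF-8 text files and keys under `binaryData` as raw bytes, which is the distinction the two "Waiting for pod with ... data" steps above exercise.
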
Feb 3 11:22:02.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:22:02.867: INFO: namespace: e2e-tests-configmap-z9g6z, resource: bindings, ignored listing per whitelist Feb 3 11:22:02.872: INFO: namespace e2e-tests-configmap-z9g6z deletion completed in 22.458861859s • [SLOW TEST:35.048 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:22:02.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-m2mwm STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 11:22:03.187: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 11:22:31.670: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-m2mwm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 11:22:31.670: INFO: >>> kubeConfig: /root/.kube/config I0203 11:22:31.805504 8 log.go:172] (0xc000ffe790) (0xc0009d3680) Create stream I0203 11:22:31.805627 8 log.go:172] (0xc000ffe790) (0xc0009d3680) Stream added, broadcasting: 1 I0203 11:22:31.817367 8 log.go:172] (0xc000ffe790) Reply frame received for 1 I0203 11:22:31.817413 8 log.go:172] (0xc000ffe790) (0xc0009d3720) Create stream I0203 11:22:31.817428 8 log.go:172] (0xc000ffe790) (0xc0009d3720) Stream added, broadcasting: 3 I0203 11:22:31.819407 8 log.go:172] (0xc000ffe790) Reply frame received for 3 I0203 11:22:31.819499 8 log.go:172] (0xc000ffe790) (0xc001d1be00) Create stream I0203 11:22:31.819526 8 log.go:172] (0xc000ffe790) (0xc001d1be00) Stream added, broadcasting: 5 I0203 11:22:31.824050 8 log.go:172] (0xc000ffe790) Reply frame received for 5 I0203 11:22:32.064589 8 log.go:172] (0xc000ffe790) Data frame received for 3 I0203 11:22:32.064672 8 log.go:172] (0xc0009d3720) (3) Data frame handling I0203 11:22:32.064704 8 log.go:172] (0xc0009d3720) (3) Data frame sent I0203 11:22:32.242002 8 log.go:172] (0xc000ffe790) Data frame received for 1 I0203 11:22:32.242296 8 log.go:172] (0xc000ffe790) (0xc0009d3720) Stream removed, broadcasting: 3 I0203 11:22:32.242409 8 log.go:172] (0xc0009d3680) (1) Data frame handling I0203 11:22:32.242454 8 log.go:172] (0xc0009d3680) (1) Data frame sent I0203 11:22:32.242540 8 log.go:172] (0xc000ffe790) (0xc001d1be00) 
Stream removed, broadcasting: 5 I0203 11:22:32.242672 8 log.go:172] (0xc000ffe790) (0xc0009d3680) Stream removed, broadcasting: 1 I0203 11:22:32.242708 8 log.go:172] (0xc000ffe790) Go away received I0203 11:22:32.243596 8 log.go:172] (0xc000ffe790) (0xc0009d3680) Stream removed, broadcasting: 1 I0203 11:22:32.243618 8 log.go:172] (0xc000ffe790) (0xc0009d3720) Stream removed, broadcasting: 3 I0203 11:22:32.243639 8 log.go:172] (0xc000ffe790) (0xc001d1be00) Stream removed, broadcasting: 5 Feb 3 11:22:32.243: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:22:32.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-m2mwm" for this suite. Feb 3 11:22:56.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:22:56.572: INFO: namespace: e2e-tests-pod-network-test-m2mwm, resource: bindings, ignored listing per whitelist Feb 3 11:22:56.645: INFO: namespace e2e-tests-pod-network-test-m2mwm deletion completed in 24.382271066s • [SLOW TEST:53.773 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:22:56.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 3 11:22:56.765: INFO: Waiting up to 5m0s for pod "pod-86f532ba-4677-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-6w782" to be "success or failure" Feb 3 11:22:56.771: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.910473ms Feb 3 11:22:58.785: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020232399s Feb 3 11:23:00.805: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039865247s Feb 3 11:23:03.229: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463988918s Feb 3 11:23:05.252: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487221726s Feb 3 11:23:07.322: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.557573395s STEP: Saw pod success Feb 3 11:23:07.322: INFO: Pod "pod-86f532ba-4677-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:23:07.354: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-86f532ba-4677-11ea-ab15-0242ac110005 container test-container: STEP: delete the pod Feb 3 11:23:08.071: INFO: Waiting for pod pod-86f532ba-4677-11ea-ab15-0242ac110005 to disappear Feb 3 11:23:08.079: INFO: Pod pod-86f532ba-4677-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:23:08.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6w782" for this suite. Feb 3 11:23:14.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:23:14.229: INFO: namespace: e2e-tests-emptydir-6w782, resource: bindings, ignored listing per whitelist Feb 3 11:23:14.348: INFO: namespace e2e-tests-emptydir-6w782 deletion completed in 6.263973601s • [SLOW TEST:17.703 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:23:14.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 3 11:23:32.878: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:32.932: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:34.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:34.946: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:36.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:36.953: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:38.933: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:38.962: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:40.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:41.042: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:42.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:42.952: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:44.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:44.956: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:46.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:46.963: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:48.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:48.949: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:50.933: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:50.948: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:52.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:52.958: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:54.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:54.958: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:56.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:56.951: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:23:58.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:23:58.961: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:24:00.933: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:24:00.988: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 11:24:02.932: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 11:24:02.946: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:24:03.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-n6pk6" for this suite. 
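
The lifecycle-hook spec just above starts a container to handle HTTP hook requests, creates pod-with-prestop-exec-hook, deletes it, and then checks that the preStop hook fired. A minimal sketch of a pod carrying such a hook; the image, the sleep command, and the handler URL are placeholders, not the suite's actual helper:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before it is stopped; a check like the
          # one above verifies that this request reached the handler pod.
          command: ["sh", "-c", "wget -qO- http://hook-handler.example.invalid/echo?msg=prestop || true"]

Because a preStop hook runs during termination, the pod stays in Terminating until the hook and the grace period complete, which is consistent with the roughly thirty seconds of "still exists" polling in the log above.
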
Feb 3 11:24:27.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:24:27.240: INFO: namespace: e2e-tests-container-lifecycle-hook-n6pk6, resource: bindings, ignored listing per whitelist Feb 3 11:24:27.364: INFO: namespace e2e-tests-container-lifecycle-hook-n6pk6 deletion completed in 24.327233587s • [SLOW TEST:73.015 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:24:27.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-bd13d172-4677-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:24:27.612: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-rqsdb" to be "success or failure" Feb 3 11:24:27.666: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.702585ms Feb 3 11:24:29.687: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07458384s Feb 3 11:24:31.703: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091141105s Feb 3 11:24:33.750: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137923642s Feb 3 11:24:36.174: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562323274s Feb 3 11:24:38.190: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.577734524s STEP: Saw pod success Feb 3 11:24:38.190: INFO: Pod "pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:24:38.204: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 3 11:24:38.631: INFO: Waiting for pod pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005 to disappear Feb 3 11:24:38.757: INFO: Pod pod-projected-configmaps-bd14a92b-4677-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:24:38.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rqsdb" for this suite. Feb 3 11:24:46.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:24:47.017: INFO: namespace: e2e-tests-projected-rqsdb, resource: bindings, ignored listing per whitelist Feb 3 11:24:47.025: INFO: namespace e2e-tests-projected-rqsdb deletion completed in 8.256532039s • [SLOW TEST:19.661 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:24:47.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 3 11:24:47.251: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 3 11:24:52.266: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:24:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-nzrpc" for this suite. 
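
The ReplicationController spec just above creates an RC, waits for its pod (base name pod-release) to appear, changes the matched label on that pod, and verifies it is released. A sketch of an RC of that shape; the label key and values are illustrative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release                   # pods with this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: busybox
        command: ["sh", "-c", "sleep 3600"]

Relabeling a running pod so it no longer matches `spec.selector` releases it (the controller owner reference is removed), and the RC then creates a replacement to get back to the requested replica count.
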
Feb 3 11:25:04.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:25:04.391: INFO: namespace: e2e-tests-replication-controller-nzrpc, resource: bindings, ignored listing per whitelist Feb 3 11:25:04.436: INFO: namespace e2e-tests-replication-controller-nzrpc deletion completed in 11.00521454s • [SLOW TEST:17.411 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:25:04.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 3 11:25:04.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 3 11:25:04.920: INFO: stderr: "" Feb 3 11:25:04.921: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:25:04.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6qmkk" for this suite. 
Feb 3 11:25:11.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:25:11.172: INFO: namespace: e2e-tests-kubectl-6qmkk, resource: bindings, ignored listing per whitelist Feb 3 11:25:11.294: INFO: namespace e2e-tests-kubectl-6qmkk deletion completed in 6.357242505s • [SLOW TEST:6.858 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:25:11.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 3 11:25:19.536: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d743bb26-4677-11ea-ab15-0242ac110005,GenerateName:,Namespace:e2e-tests-events-dbcpv,SelfLink:/api/v1/namespaces/e2e-tests-events-dbcpv/pods/send-events-d743bb26-4677-11ea-ab15-0242ac110005,UID:d744dbed-4677-11ea-a994-fa163e34d433,ResourceVersion:20409403,Generation:0,CreationTimestamp:2020-02-03 11:25:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 481444340,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-whs7x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-whs7x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-whs7x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cc8ed0} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc001cc8ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:25:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:25:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:25:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:25:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-03 11:25:11 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-03 11:25:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://69e1d483390eb845e813643a34e4a5ad92770bf539be38f19dbb81a88598342f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 3 11:25:21.556: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 3 11:25:23.580: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:25:23.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-dbcpv" for this suite. Feb 3 11:26:03.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:26:03.953: INFO: namespace: e2e-tests-events-dbcpv, resource: bindings, ignored listing per whitelist Feb 3 11:26:04.070: INFO: namespace e2e-tests-events-dbcpv deletion completed in 40.418450904s • [SLOW TEST:52.774 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:26:04.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f6c18bb6-4677-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 3 11:26:04.442: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-6mxlg" to be "success or failure" Feb 3 11:26:04.485: INFO: Pod 
"pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.586079ms Feb 3 11:26:06.577: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135393437s Feb 3 11:26:08.593: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15118243s Feb 3 11:26:10.615: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172732536s Feb 3 11:26:12.842: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3996871s Feb 3 11:26:14.873: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.430768657s STEP: Saw pod success Feb 3 11:26:14.873: INFO: Pod "pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:26:14.885: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 3 11:26:15.047: INFO: Waiting for pod pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005 to disappear Feb 3 11:26:15.116: INFO: Pod pod-projected-configmaps-f6d26d38-4677-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:26:15.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6mxlg" for this suite. 
Feb 3 11:26:21.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 11:26:21.244: INFO: namespace: e2e-tests-projected-6mxlg, resource: bindings, ignored listing per whitelist Feb 3 11:26:21.334: INFO: namespace e2e-tests-projected-6mxlg deletion completed in 6.197466026s • [SLOW TEST:17.264 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 3 11:26:21.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0108483f-4678-11ea-ab15-0242ac110005 STEP: Creating a pod to test consume secrets Feb 3 11:26:21.602: INFO: Waiting up to 5m0s for pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-44vn6" to be "success or failure" Feb 3 11:26:21.628: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.138503ms Feb 3 11:26:23.638: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035921411s Feb 3 11:26:25.656: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053768067s Feb 3 11:26:27.669: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067293929s Feb 3 11:26:29.680: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078404787s Feb 3 11:26:31.698: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095977436s STEP: Saw pod success Feb 3 11:26:31.698: INFO: Pod "pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure" Feb 3 11:26:31.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005 container secret-env-test: STEP: delete the pod Feb 3 11:26:31.960: INFO: Waiting for pod pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005 to disappear Feb 3 11:26:32.222: INFO: Pod pod-secrets-010d1d71-4678-11ea-ab15-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 3 11:26:32.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-44vn6" for this suite. 
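
The Secrets spec just above consumes a secret through an environment variable (the secret-env-test container) rather than a volume. A minimal sketch using `secretKeyRef`; the secret name, key, and variable name are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo         # Secret in the same namespace
          key: data-1                   # key whose value becomes the variable
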
Feb  3 11:26:38.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:26:38.726: INFO: namespace: e2e-tests-secrets-44vn6, resource: bindings, ignored listing per whitelist
Feb  3 11:26:38.730: INFO: namespace e2e-tests-secrets-44vn6 deletion completed in 6.49230476s

• [SLOW TEST:17.396 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:26:38.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0b685f3f-4678-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 11:26:39.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-clcnk" to be "success or failure"
Feb  3 11:26:39.073: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.578723ms
Feb  3 11:26:41.254: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198921772s
Feb  3 11:26:43.273: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218002088s
Feb  3 11:26:45.516: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460814675s
Feb  3 11:26:47.892: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.836871361s
Feb  3 11:26:49.922: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.867291113s
STEP: Saw pod success
Feb  3 11:26:49.922: INFO: Pod "pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:26:49.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 11:26:50.035: INFO: Waiting for pod pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:26:50.204: INFO: Pod pod-projected-configmaps-0b73b9da-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:26:50.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-clcnk" for this suite.
Feb  3 11:26:57.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:26:57.101: INFO: namespace: e2e-tests-projected-clcnk, resource: bindings, ignored listing per whitelist
Feb  3 11:26:57.163: INFO: namespace e2e-tests-projected-clcnk deletion completed in 6.939917881s

• [SLOW TEST:18.432 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:26:57.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 11:27:27.455: INFO: Container started at 2020-02-03 11:27:05 +0000 UTC, pod became ready at 2020-02-03 11:27:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:27:27.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lvhmg" for this suite.
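
The readiness-probe spec above only asserts timing (the container started at 11:27:05 and the pod turned Ready at 11:27:26) and that restartCount stays 0. A hand-rolled equivalent looks roughly like the sketch below; the pod name, image, probe command, and the 20s initial delay are assumptions for illustration, not the values the suite uses.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: readiness-demo
    image: busybox:1.29
    command: ["sh", "-c", "echo ok > /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
$ kubectl get pod readiness-demo -w    # READY should stay 0/1 until the initial delay has elapsed
$ kubectl get pod readiness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'    # expect 0
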
Feb  3 11:27:51.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:27:51.567: INFO: namespace: e2e-tests-container-probe-lvhmg, resource: bindings, ignored listing per whitelist
Feb  3 11:27:51.763: INFO: namespace e2e-tests-container-probe-lvhmg deletion completed in 24.300031946s

• [SLOW TEST:54.600 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:27:51.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  3 11:27:51.973: INFO: Waiting up to 5m0s for pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-cwzjv" to be "success or failure"
Feb  3 11:27:52.000: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.278532ms
Feb  3 11:27:54.018: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045752132s
Feb  3 11:27:56.032: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059128157s
Feb  3 11:27:58.439: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466527892s
Feb  3 11:28:00.466: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493307488s
Feb  3 11:28:02.613: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640308221s
Feb  3 11:28:04.705: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.732503884s
STEP: Saw pod success
Feb  3 11:28:04.706: INFO: Pod "pod-36ea6e7c-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:28:04.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-36ea6e7c-4678-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:28:04.983: INFO: Waiting for pod pod-36ea6e7c-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:28:04.988: INFO: Pod pod-36ea6e7c-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:28:04.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cwzjv" for this suite.
Feb  3 11:28:11.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:28:11.243: INFO: namespace: e2e-tests-emptydir-cwzjv, resource: bindings, ignored listing per whitelist
Feb  3 11:28:11.358: INFO: namespace e2e-tests-emptydir-cwzjv deletion completed in 6.361269327s

• [SLOW TEST:19.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:28:11.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:28:11.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-688w7" to be "success or failure"
Feb  3 11:28:11.587: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.513836ms
Feb  3 11:28:13.621: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05924914s
Feb  3 11:28:15.645: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083265192s
Feb  3 11:28:17.885: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323142492s
Feb  3 11:28:20.116: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553792381s
Feb  3 11:28:22.146: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.583700762s
STEP: Saw pod success
Feb  3 11:28:22.146: INFO: Pod "downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:28:22.162: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:28:22.322: INFO: Waiting for pod downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:28:23.106: INFO: Pod downwardapi-volume-42976435-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:28:23.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-688w7" for this suite.
Feb  3 11:28:29.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:28:29.861: INFO: namespace: e2e-tests-projected-688w7, resource: bindings, ignored listing per whitelist
Feb  3 11:28:29.999: INFO: namespace e2e-tests-projected-688w7 deletion completed in 6.874858419s

• [SLOW TEST:18.641 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:28:30.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  3 11:28:30.224: INFO: Waiting up to 5m0s for pod "pod-4db53be2-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-2r2vm" to be "success or failure"
Feb  3 11:28:30.249: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.70248ms
Feb  3 11:28:32.273: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049221784s
Feb  3 11:28:34.296: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072465333s
Feb  3 11:28:36.450: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226082904s
Feb  3 11:28:38.568: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344218254s
Feb  3 11:28:40.615: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.391604171s
Feb  3 11:28:42.634: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.409749751s
STEP: Saw pod success
Feb  3 11:28:42.634: INFO: Pod "pod-4db53be2-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:28:42.639: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4db53be2-4678-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:28:42.699: INFO: Waiting for pod pod-4db53be2-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:28:42.719: INFO: Pod pod-4db53be2-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:28:42.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2r2vm" for this suite.
Feb  3 11:28:48.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:28:48.935: INFO: namespace: e2e-tests-emptydir-2r2vm, resource: bindings, ignored listing per whitelist
Feb  3 11:28:49.090: INFO: namespace e2e-tests-emptydir-2r2vm deletion completed in 6.354280896s

• [SLOW TEST:19.090 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:28:49.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:28:49.285: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-bndqq" to be "success or failure"
Feb  3 11:28:49.296: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.33794ms
Feb  3 11:28:51.595: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309781696s
Feb  3 11:28:53.633: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34845432s
Feb  3 11:28:55.651: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366464324s
Feb  3 11:28:57.854: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568685805s
Feb  3 11:28:59.887: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602009539s
STEP: Saw pod success
Feb  3 11:28:59.887: INFO: Pod "downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:28:59.896: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:29:00.164: INFO: Waiting for pod downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:29:00.258: INFO: Pod downwardapi-volume-59139f52-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:29:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bndqq" for this suite.
Feb  3 11:29:06.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:29:06.463: INFO: namespace: e2e-tests-projected-bndqq, resource: bindings, ignored listing per whitelist
Feb  3 11:29:06.570: INFO: namespace e2e-tests-projected-bndqq deletion completed in 6.302031432s

• [SLOW TEST:17.480 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:29:06.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 11:29:06.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mk98g'
Feb  3 11:29:08.836: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 11:29:08.836: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb  3 11:29:08.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-mk98g'
Feb  3 11:29:09.490: INFO: stderr: ""
Feb  3 11:29:09.490: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:29:09.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mk98g" for this suite.
Feb  3 11:29:15.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:29:15.676: INFO: namespace: e2e-tests-kubectl-mk98g, resource: bindings, ignored listing per whitelist
Feb  3 11:29:15.763: INFO: namespace e2e-tests-kubectl-mk98g deletion completed in 6.249838799s

• [SLOW TEST:9.192 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:29:15.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-68f18c8f-4678-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-68f18c8f-4678-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:29:26.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pvjvv" for this suite.
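
The ConfigMap-update spec above creates a configMap-backed volume, changes the ConfigMap, and waits for the kubelet to re-project the file. A minimal manual version, with placeholder names and values, looks like the sketch below; the propagation delay depends on the kubelet sync period, so the second read only converges eventually.

$ kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
EOF
$ kubectl exec pod-configmap-upd -- cat /etc/configmap-volume/data-1    # value-1
$ kubectl patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
$ kubectl exec pod-configmap-upd -- cat /etc/configmap-volume/data-1    # eventually value-2
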
Feb  3 11:29:44.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:29:44.418: INFO: namespace: e2e-tests-configmap-pvjvv, resource: bindings, ignored listing per whitelist
Feb  3 11:29:44.610: INFO: namespace e2e-tests-configmap-pvjvv deletion completed in 18.40225239s

• [SLOW TEST:28.846 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:29:44.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 11:29:44.794: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.108283ms)
Feb  3 11:29:44.812: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.544614ms)
Feb  3 11:29:44.830: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.541215ms)
Feb  3 11:29:44.839: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.751707ms)
Feb  3 11:29:44.848: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.071957ms)
Feb  3 11:29:44.872: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.875456ms)
Feb  3 11:29:44.934: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 61.855228ms)
Feb  3 11:29:44.942: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.982244ms)
Feb  3 11:29:44.948: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.195977ms)
Feb  3 11:29:44.952: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.091531ms)
Feb  3 11:29:44.958: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.419496ms)
Feb  3 11:29:44.964: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.494359ms)
Feb  3 11:29:44.971: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.928832ms)
Feb  3 11:29:44.978: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.508634ms)
Feb  3 11:29:44.983: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.880071ms)
Feb  3 11:29:44.988: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.689389ms)
Feb  3 11:29:44.993: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.743361ms)
Feb  3 11:29:44.996: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.447427ms)
Feb  3 11:29:45.003: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.434974ms)
Feb  3 11:29:45.007: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.622785ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:29:45.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mt67t" for this suite.
Feb  3 11:29:51.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:29:51.125: INFO: namespace: e2e-tests-proxy-mt67t, resource: bindings, ignored listing per whitelist
Feb  3 11:29:51.221: INFO: namespace e2e-tests-proxy-mt67t deletion completed in 6.209370471s

• [SLOW TEST:6.611 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
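
The proxy check above simply GETs the node's /proxy/logs/ subresource through the API server twenty times and expects an HTTP 200 with the kubelet's log directory listing (alternatives.log and so on). The same request can be issued by hand; the node name below is the one from this run, and the rest is a minimal sketch.

$ NODE=hunter-server-hu5at5svl7ps
$ kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/"
$ # or, through a local proxy:
$ kubectl proxy --port=8001 &
$ curl -s "http://127.0.0.1:8001/api/v1/nodes/${NODE}/proxy/logs/"
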
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:29:51.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  3 11:29:51.398: INFO: Waiting up to 5m0s for pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-k72vx" to be "success or failure"
Feb  3 11:29:51.457: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.217389ms
Feb  3 11:29:53.563: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164586964s
Feb  3 11:29:55.664: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265516169s
Feb  3 11:29:57.841: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442443828s
Feb  3 11:29:59.870: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471505687s
Feb  3 11:30:02.106: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.707299851s
STEP: Saw pod success
Feb  3 11:30:02.106: INFO: Pod "pod-7e11e7f9-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:30:02.130: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7e11e7f9-4678-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:30:03.179: INFO: Waiting for pod pod-7e11e7f9-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:30:03.212: INFO: Pod pod-7e11e7f9-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:30:03.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k72vx" for this suite.
Feb  3 11:30:11.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:30:11.471: INFO: namespace: e2e-tests-emptydir-k72vx, resource: bindings, ignored listing per whitelist
Feb  3 11:30:11.553: INFO: namespace e2e-tests-emptydir-k72vx deletion completed in 8.331917318s

• [SLOW TEST:20.330 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
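
The EmptyDir specs in this run (root/non-root, 0644/0666, tmpfs/default medium) all follow the same shape: mount an emptyDir, optionally as a non-root UID, and verify the mount's filesystem type and permissions from inside the container. A consolidated sketch follows; the pod name, UID, mount path, image, and the ls/grep style of checking are assumptions, since the suite's own mounttest container and flags are not shown in this log.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # drop this for the "root" variants
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt/scratch && grep /mnt/scratch /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed; omit for the node-default medium variants
EOF
$ kubectl logs emptydir-demo   # prints the directory mode/owner and the tmpfs mount entry
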
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:30:11.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 11:30:11.720: INFO: Waiting up to 5m0s for pod "pod-8a34830b-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-hpbz4" to be "success or failure"
Feb  3 11:30:11.734: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.280039ms
Feb  3 11:30:13.769: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049200941s
Feb  3 11:30:15.783: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063350382s
Feb  3 11:30:17.835: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114943164s
Feb  3 11:30:20.177: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457077918s
Feb  3 11:30:22.203: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482970544s
STEP: Saw pod success
Feb  3 11:30:22.203: INFO: Pod "pod-8a34830b-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:30:22.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8a34830b-4678-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:30:22.420: INFO: Waiting for pod pod-8a34830b-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:30:22.429: INFO: Pod pod-8a34830b-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:30:22.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hpbz4" for this suite.
Feb  3 11:30:28.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:30:28.523: INFO: namespace: e2e-tests-emptydir-hpbz4, resource: bindings, ignored listing per whitelist
Feb  3 11:30:28.729: INFO: namespace e2e-tests-emptydir-hpbz4 deletion completed in 6.294225374s

• [SLOW TEST:17.176 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:30:28.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-m69fr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-m69fr to expose endpoints map[]
Feb  3 11:30:29.142: INFO: Get endpoints failed (24.55583ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  3 11:30:30.161: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-m69fr exposes endpoints map[] (1.043363645s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-m69fr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-m69fr to expose endpoints map[pod1:[100]]
Feb  3 11:30:34.315: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.127960193s elapsed, will retry)
Feb  3 11:30:37.681: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-m69fr exposes endpoints map[pod1:[100]] (7.493891161s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-m69fr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-m69fr to expose endpoints map[pod1:[100] pod2:[101]]
Feb  3 11:30:42.325: INFO: Unexpected endpoints: found map[953811eb-4678-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.622338244s elapsed, will retry)
Feb  3 11:30:46.720: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-m69fr exposes endpoints map[pod2:[101] pod1:[100]] (9.017257525s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-m69fr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-m69fr to expose endpoints map[pod2:[101]]
Feb  3 11:30:46.869: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-m69fr exposes endpoints map[pod2:[101]] (103.81399ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-m69fr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-m69fr to expose endpoints map[]
Feb  3 11:30:48.257: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-m69fr exposes endpoints map[] (1.361679744s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:30:48.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-m69fr" for this suite.
Feb  3 11:31:12.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:31:12.910: INFO: namespace: e2e-tests-services-m69fr, resource: bindings, ignored listing per whitelist
Feb  3 11:31:13.040: INFO: namespace e2e-tests-services-m69fr deletion completed in 24.370966335s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:44.311 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
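
The multiport Services spec above builds a two-port Service and checks that the Endpoints object tracks pod1 behind one port and pod2 behind the other as pods come and go. A rough manual equivalent is sketched below; the label, port names, and container ports are placeholders chosen to mirror the 100/101 target ports visible in the log, not the suite's actual manifests.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-demo
  ports:
  - name: portname1
    port: 80
    targetPort: http-a
  - name: portname2
    port: 81
    targetPort: http-b
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: multiport-demo
spec:
  containers:
  - name: c
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - name: http-a
      containerPort: 100
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: multiport-demo
spec:
  containers:
  - name: c
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - name: http-b
      containerPort: 101
EOF
$ kubectl get endpoints multi-endpoint-test -o yaml   # pod1 should back portname1 and pod2 portname2 once both are Running
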
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:31:13.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:31:13.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-n4v84" to be "success or failure"
Feb  3 11:31:13.309: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.037313ms
Feb  3 11:31:15.560: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271362156s
Feb  3 11:31:17.582: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293585464s
Feb  3 11:31:19.842: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554238385s
Feb  3 11:31:21.871: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582508543s
Feb  3 11:31:23.907: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.618944112s
STEP: Saw pod success
Feb  3 11:31:23.907: INFO: Pod "downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:31:23.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:31:24.799: INFO: Waiting for pod downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005 to disappear
Feb  3 11:31:24.988: INFO: Pod downwardapi-volume-aedc9739-4678-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:31:24.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n4v84" for this suite.
Feb  3 11:31:31.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:31:31.348: INFO: namespace: e2e-tests-projected-n4v84, resource: bindings, ignored listing per whitelist
Feb  3 11:31:31.407: INFO: namespace e2e-tests-projected-n4v84 deletion completed in 6.391387818s

• [SLOW TEST:18.366 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
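
The three Projected downwardAPI specs in this run (podname only, explicit item mode, and node-allocatable CPU as the default limit) can all be exercised with one projected volume. The sketch below is illustrative only; the pod name, mount path, file names, and the 0400 mode are placeholders, and limits.cpu falls back to the node's allocatable CPU because no limit is set on the container.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname; echo; cat /etc/podinfo/cpu_limit; echo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400                      # explicit per-item file mode
            fieldRef:
              fieldPath: metadata.name
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu          # no limit set, so this reports node allocatable CPU
EOF
$ kubectl logs downwardapi-volume-demo
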
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:31:31.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  3 11:31:31.653: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-r2mvz" to be "success or failure"
Feb  3 11:31:31.663: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.472607ms
Feb  3 11:31:33.675: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021252991s
Feb  3 11:31:35.693: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039006104s
Feb  3 11:31:37.706: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051937614s
Feb  3 11:31:39.748: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09466377s
Feb  3 11:31:43.480: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.826754338s
Feb  3 11:31:45.575: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.921249618s
Feb  3 11:31:47.959: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.305472303s
STEP: Saw pod success
Feb  3 11:31:47.959: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  3 11:31:47.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  3 11:31:48.319: INFO: Waiting for pod pod-host-path-test to disappear
Feb  3 11:31:48.342: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:31:48.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-r2mvz" for this suite.
Feb  3 11:31:54.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:31:54.734: INFO: namespace: e2e-tests-hostpath-r2mvz, resource: bindings, ignored listing per whitelist
Feb  3 11:31:54.761: INFO: namespace e2e-tests-hostpath-r2mvz deletion completed in 6.406845009s

• [SLOW TEST:23.354 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
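
The HostPath spec above mounts a host directory and checks the mode the container sees. A plain hostPath pod is enough to poke at the same behaviour by hand; the host path, hostPath type, pod name, and ls-based check below are placeholders, not the suite's exact fixture.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
$ kubectl logs pod-host-path-demo   # shows the host directory's mode as seen from inside the container
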
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:31:54.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 11:31:54.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jwjml'
Feb  3 11:31:55.264: INFO: stderr: ""
Feb  3 11:31:55.264: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  3 11:32:05.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jwjml -o json'
Feb  3 11:32:05.460: INFO: stderr: ""
Feb  3 11:32:05.460: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-03T11:31:55Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-jwjml\",\n        \"resourceVersion\": \"20410311\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-jwjml/pods/e2e-test-nginx-pod\",\n        \"uid\": \"c7e91b0b-4678-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8wvk9\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8wvk9\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8wvk9\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T11:31:55Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T11:32:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T11:32:05Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T11:31:55Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://71b6b6c2cda9fddff0d51ff785a082a92319c0ad235040c9d1d0ae570a6c61cf\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-03T11:32:04Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-03T11:31:55Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  3 11:32:05.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-jwjml'
Feb  3 11:32:05.911: INFO: stderr: ""
Feb  3 11:32:05.911: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  3 11:32:05.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jwjml'
Feb  3 11:32:13.300: INFO: stderr: ""
Feb  3 11:32:13.301: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:32:13.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jwjml" for this suite.
Feb  3 11:32:19.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:32:19.574: INFO: namespace: e2e-tests-kubectl-jwjml, resource: bindings, ignored listing per whitelist
Feb  3 11:32:19.582: INFO: namespace e2e-tests-kubectl-jwjml deletion completed in 6.231120983s

• [SLOW TEST:24.818 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
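
The kubectl replace spec above can be retraced from the commands recorded in the log: create the pod with the v1.13-era run generator, pull its JSON, swap the image for docker.io/library/busybox:1.29, and pipe it back through kubectl replace (the container image is one of the few pod fields that may be mutated in place). The sed-based edit below is an assumption about how to produce the modified manifest; the suite pipes its own edited copy.

$ kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
$ kubectl get pod e2e-test-nginx-pod -o json \
    | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
    | kubectl replace -f -
$ kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'   # docker.io/library/busybox:1.29
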
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:32:19.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wh8hc
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wh8hc
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-wh8hc
Feb  3 11:32:19.823: INFO: Found 0 stateful pods, waiting for 1
Feb  3 11:32:29.844: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 11:32:39.842: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  3 11:32:39.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:32:40.651: INFO: stderr: "I0203 11:32:40.133492    2015 log.go:172] (0xc00015a840) (0xc000677400) Create stream\nI0203 11:32:40.133996    2015 log.go:172] (0xc00015a840) (0xc000677400) Stream added, broadcasting: 1\nI0203 11:32:40.139345    2015 log.go:172] (0xc00015a840) Reply frame received for 1\nI0203 11:32:40.139431    2015 log.go:172] (0xc00015a840) (0xc0006c6000) Create stream\nI0203 11:32:40.139441    2015 log.go:172] (0xc00015a840) (0xc0006c6000) Stream added, broadcasting: 3\nI0203 11:32:40.140551    2015 log.go:172] (0xc00015a840) Reply frame received for 3\nI0203 11:32:40.140582    2015 log.go:172] (0xc00015a840) (0xc0006c60a0) Create stream\nI0203 11:32:40.140596    2015 log.go:172] (0xc00015a840) (0xc0006c60a0) Stream added, broadcasting: 5\nI0203 11:32:40.141572    2015 log.go:172] (0xc00015a840) Reply frame received for 5\nI0203 11:32:40.396295    2015 log.go:172] (0xc00015a840) Data frame received for 3\nI0203 11:32:40.396434    2015 log.go:172] (0xc0006c6000) (3) Data frame handling\nI0203 11:32:40.396470    2015 log.go:172] (0xc0006c6000) (3) Data frame sent\nI0203 11:32:40.618639    2015 log.go:172] (0xc00015a840) (0xc0006c6000) Stream removed, broadcasting: 3\nI0203 11:32:40.619493    2015 log.go:172] (0xc00015a840) (0xc0006c60a0) Stream removed, broadcasting: 5\nI0203 11:32:40.620126    2015 log.go:172] (0xc00015a840) Data frame received for 1\nI0203 11:32:40.620306    2015 log.go:172] (0xc000677400) (1) Data frame handling\nI0203 11:32:40.620370    2015 log.go:172] (0xc000677400) (1) Data frame sent\nI0203 11:32:40.620569    2015 log.go:172] (0xc00015a840) (0xc000677400) Stream removed, broadcasting: 1\nI0203 11:32:40.620695    2015 log.go:172] (0xc00015a840) Go away received\nI0203 11:32:40.622466    2015 log.go:172] (0xc00015a840) (0xc000677400) Stream removed, broadcasting: 1\nI0203 11:32:40.622502    2015 log.go:172] (0xc00015a840) (0xc0006c6000) Stream removed, broadcasting: 3\nI0203 11:32:40.622532    2015 log.go:172] (0xc00015a840) (0xc0006c60a0) Stream removed, broadcasting: 5\n"
Feb  3 11:32:40.652: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:32:40.652: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
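The Ready flip recorded just below is caused by the command above: index.html is moved out of the nginx web root so the pod's readiness check starts failing while the container keeps running, which is how the test manufactures an "unhealthy" stateful pod. A minimal sketch of issuing the same exec from Go (the command, pod, and namespace are exactly as logged; the wrapper function itself is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

// breakReadiness moves index.html out of the nginx web root on the given pod.
// The trailing "|| true" mirrors the logged command and keeps the shell exit
// code at 0 even if the file has already been moved.
func breakReadiness(kubeconfig, namespace, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"exec", "--namespace="+namespace, pod, "--",
		"/bin/sh", "-c", "mv -v /usr/share/nginx/html/index.html /tmp/ || true").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := breakReadiness("/root/.kube/config", "e2e-tests-statefulset-wh8hc", "ss-0")
	fmt.Println(out, err)
}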

Feb  3 11:32:40.673: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 11:32:50.683: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 11:32:50.683: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 11:32:50.711: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:32:50.711: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:32:50.712: INFO: 
Feb  3 11:32:50.712: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  3 11:32:52.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990193225s
Feb  3 11:32:53.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.926528348s
Feb  3 11:32:55.010: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.80890086s
Feb  3 11:32:56.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.69116163s
Feb  3 11:32:57.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.672371222s
Feb  3 11:33:00.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.601766955s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-wh8hc
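Because ss-0 is still unready at this point, the scale-up only completes without stalling when the StatefulSet uses podManagementPolicy: Parallel; under the default OrderedReady policy the controller would wait for ss-0 to become Ready before creating ss-1 and ss-2. The framework updates spec.replicas through the API; a minimal kubectl-based equivalent driven from Go (the replica count and resource name come from the log, the rest is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// scaleStatefulSet sets spec.replicas on the named StatefulSet, as the test
// does when it bursts ss from 1 to 3 replicas.
func scaleStatefulSet(kubeconfig, namespace, name string, replicas int) error {
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"scale", "statefulset", name, "--namespace="+namespace,
		fmt.Sprintf("--replicas=%d", replicas)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("scale failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(scaleStatefulSet("/root/.kube/config", "e2e-tests-statefulset-wh8hc", "ss", 3))
}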
Feb  3 11:33:01.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:02.334: INFO: stderr: "I0203 11:33:01.570978    2037 log.go:172] (0xc000138840) (0xc00066b220) Create stream\nI0203 11:33:01.571363    2037 log.go:172] (0xc000138840) (0xc00066b220) Stream added, broadcasting: 1\nI0203 11:33:01.611198    2037 log.go:172] (0xc000138840) Reply frame received for 1\nI0203 11:33:01.611442    2037 log.go:172] (0xc000138840) (0xc000734000) Create stream\nI0203 11:33:01.611473    2037 log.go:172] (0xc000138840) (0xc000734000) Stream added, broadcasting: 3\nI0203 11:33:01.619511    2037 log.go:172] (0xc000138840) Reply frame received for 3\nI0203 11:33:01.619555    2037 log.go:172] (0xc000138840) (0xc0007340a0) Create stream\nI0203 11:33:01.619564    2037 log.go:172] (0xc000138840) (0xc0007340a0) Stream added, broadcasting: 5\nI0203 11:33:01.621923    2037 log.go:172] (0xc000138840) Reply frame received for 5\nI0203 11:33:02.085463    2037 log.go:172] (0xc000138840) Data frame received for 3\nI0203 11:33:02.085600    2037 log.go:172] (0xc000734000) (3) Data frame handling\nI0203 11:33:02.085626    2037 log.go:172] (0xc000734000) (3) Data frame sent\nI0203 11:33:02.324251    2037 log.go:172] (0xc000138840) (0xc000734000) Stream removed, broadcasting: 3\nI0203 11:33:02.324390    2037 log.go:172] (0xc000138840) Data frame received for 1\nI0203 11:33:02.324409    2037 log.go:172] (0xc00066b220) (1) Data frame handling\nI0203 11:33:02.324424    2037 log.go:172] (0xc00066b220) (1) Data frame sent\nI0203 11:33:02.324452    2037 log.go:172] (0xc000138840) (0xc00066b220) Stream removed, broadcasting: 1\nI0203 11:33:02.324577    2037 log.go:172] (0xc000138840) (0xc0007340a0) Stream removed, broadcasting: 5\nI0203 11:33:02.324606    2037 log.go:172] (0xc000138840) Go away received\nI0203 11:33:02.324986    2037 log.go:172] (0xc000138840) (0xc00066b220) Stream removed, broadcasting: 1\nI0203 11:33:02.325001    2037 log.go:172] (0xc000138840) (0xc000734000) Stream removed, broadcasting: 3\nI0203 11:33:02.325005    2037 log.go:172] (0xc000138840) (0xc0007340a0) Stream removed, broadcasting: 5\n"
Feb  3 11:33:02.335: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 11:33:02.335: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 11:33:02.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:02.960: INFO: stderr: "I0203 11:33:02.640055    2059 log.go:172] (0xc00013a6e0) (0xc0006fe5a0) Create stream\nI0203 11:33:02.640528    2059 log.go:172] (0xc00013a6e0) (0xc0006fe5a0) Stream added, broadcasting: 1\nI0203 11:33:02.647392    2059 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0203 11:33:02.647446    2059 log.go:172] (0xc00013a6e0) (0xc000536dc0) Create stream\nI0203 11:33:02.647462    2059 log.go:172] (0xc00013a6e0) (0xc000536dc0) Stream added, broadcasting: 3\nI0203 11:33:02.648759    2059 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0203 11:33:02.648775    2059 log.go:172] (0xc00013a6e0) (0xc0006fe640) Create stream\nI0203 11:33:02.648779    2059 log.go:172] (0xc00013a6e0) (0xc0006fe640) Stream added, broadcasting: 5\nI0203 11:33:02.649809    2059 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0203 11:33:02.793887    2059 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0203 11:33:02.794041    2059 log.go:172] (0xc000536dc0) (3) Data frame handling\nI0203 11:33:02.794067    2059 log.go:172] (0xc000536dc0) (3) Data frame sent\nI0203 11:33:02.794149    2059 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0203 11:33:02.794163    2059 log.go:172] (0xc0006fe640) (5) Data frame handling\nI0203 11:33:02.794172    2059 log.go:172] (0xc0006fe640) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0203 11:33:02.949191    2059 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0203 11:33:02.949348    2059 log.go:172] (0xc00013a6e0) (0xc000536dc0) Stream removed, broadcasting: 3\nI0203 11:33:02.949414    2059 log.go:172] (0xc0006fe5a0) (1) Data frame handling\nI0203 11:33:02.949442    2059 log.go:172] (0xc0006fe5a0) (1) Data frame sent\nI0203 11:33:02.949451    2059 log.go:172] (0xc00013a6e0) (0xc0006fe5a0) Stream removed, broadcasting: 1\nI0203 11:33:02.949932    2059 log.go:172] (0xc00013a6e0) (0xc0006fe640) Stream removed, broadcasting: 5\nI0203 11:33:02.949964    2059 log.go:172] (0xc00013a6e0) (0xc0006fe5a0) Stream removed, broadcasting: 1\nI0203 11:33:02.949971    2059 log.go:172] (0xc00013a6e0) (0xc000536dc0) Stream removed, broadcasting: 3\nI0203 11:33:02.949976    2059 log.go:172] (0xc00013a6e0) (0xc0006fe640) Stream removed, broadcasting: 5\nI0203 11:33:02.950010    2059 log.go:172] (0xc00013a6e0) Go away received\n"
Feb  3 11:33:02.960: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 11:33:02.960: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 11:33:02.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:03.591: INFO: stderr: "I0203 11:33:03.275544    2081 log.go:172] (0xc00013a580) (0xc00041ebe0) Create stream\nI0203 11:33:03.275874    2081 log.go:172] (0xc00013a580) (0xc00041ebe0) Stream added, broadcasting: 1\nI0203 11:33:03.282791    2081 log.go:172] (0xc00013a580) Reply frame received for 1\nI0203 11:33:03.282889    2081 log.go:172] (0xc00013a580) (0xc0006de000) Create stream\nI0203 11:33:03.282903    2081 log.go:172] (0xc00013a580) (0xc0006de000) Stream added, broadcasting: 3\nI0203 11:33:03.283860    2081 log.go:172] (0xc00013a580) Reply frame received for 3\nI0203 11:33:03.283908    2081 log.go:172] (0xc00013a580) (0xc000560000) Create stream\nI0203 11:33:03.283918    2081 log.go:172] (0xc00013a580) (0xc000560000) Stream added, broadcasting: 5\nI0203 11:33:03.289158    2081 log.go:172] (0xc00013a580) Reply frame received for 5\nI0203 11:33:03.464176    2081 log.go:172] (0xc00013a580) Data frame received for 3\nI0203 11:33:03.464597    2081 log.go:172] (0xc0006de000) (3) Data frame handling\nI0203 11:33:03.464642    2081 log.go:172] (0xc0006de000) (3) Data frame sent\nI0203 11:33:03.464719    2081 log.go:172] (0xc00013a580) Data frame received for 5\nI0203 11:33:03.464772    2081 log.go:172] (0xc000560000) (5) Data frame handling\nI0203 11:33:03.464862    2081 log.go:172] (0xc000560000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0203 11:33:03.574189    2081 log.go:172] (0xc00013a580) Data frame received for 1\nI0203 11:33:03.574371    2081 log.go:172] (0xc00013a580) (0xc0006de000) Stream removed, broadcasting: 3\nI0203 11:33:03.574485    2081 log.go:172] (0xc00013a580) (0xc000560000) Stream removed, broadcasting: 5\nI0203 11:33:03.574638    2081 log.go:172] (0xc00041ebe0) (1) Data frame handling\nI0203 11:33:03.574669    2081 log.go:172] (0xc00041ebe0) (1) Data frame sent\nI0203 11:33:03.574709    2081 log.go:172] (0xc00013a580) (0xc00041ebe0) Stream removed, broadcasting: 1\nI0203 11:33:03.574761    2081 log.go:172] (0xc00013a580) Go away received\nI0203 11:33:03.575561    2081 log.go:172] (0xc00013a580) (0xc00041ebe0) Stream removed, broadcasting: 1\nI0203 11:33:03.575574    2081 log.go:172] (0xc00013a580) (0xc0006de000) Stream removed, broadcasting: 3\nI0203 11:33:03.575584    2081 log.go:172] (0xc00013a580) (0xc000560000) Stream removed, broadcasting: 5\n"
Feb  3 11:33:03.591: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 11:33:03.591: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 11:33:03.610: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:33:03.610: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 11:33:13.669: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:33:13.669: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:33:13.669: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  3 11:33:13.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:33:14.571: INFO: stderr: "I0203 11:33:13.978779    2102 log.go:172] (0xc0007062c0) (0xc000728640) Create stream\nI0203 11:33:13.980060    2102 log.go:172] (0xc0007062c0) (0xc000728640) Stream added, broadcasting: 1\nI0203 11:33:14.091146    2102 log.go:172] (0xc0007062c0) Reply frame received for 1\nI0203 11:33:14.091438    2102 log.go:172] (0xc0007062c0) (0xc0007286e0) Create stream\nI0203 11:33:14.091459    2102 log.go:172] (0xc0007062c0) (0xc0007286e0) Stream added, broadcasting: 3\nI0203 11:33:14.098344    2102 log.go:172] (0xc0007062c0) Reply frame received for 3\nI0203 11:33:14.098383    2102 log.go:172] (0xc0007062c0) (0xc00069edc0) Create stream\nI0203 11:33:14.098396    2102 log.go:172] (0xc0007062c0) (0xc00069edc0) Stream added, broadcasting: 5\nI0203 11:33:14.118109    2102 log.go:172] (0xc0007062c0) Reply frame received for 5\nI0203 11:33:14.351909    2102 log.go:172] (0xc0007062c0) Data frame received for 3\nI0203 11:33:14.352001    2102 log.go:172] (0xc0007286e0) (3) Data frame handling\nI0203 11:33:14.352017    2102 log.go:172] (0xc0007286e0) (3) Data frame sent\nI0203 11:33:14.556399    2102 log.go:172] (0xc0007062c0) (0xc0007286e0) Stream removed, broadcasting: 3\nI0203 11:33:14.556651    2102 log.go:172] (0xc0007062c0) Data frame received for 1\nI0203 11:33:14.556883    2102 log.go:172] (0xc0007062c0) (0xc00069edc0) Stream removed, broadcasting: 5\nI0203 11:33:14.556953    2102 log.go:172] (0xc000728640) (1) Data frame handling\nI0203 11:33:14.556988    2102 log.go:172] (0xc000728640) (1) Data frame sent\nI0203 11:33:14.556998    2102 log.go:172] (0xc0007062c0) (0xc000728640) Stream removed, broadcasting: 1\nI0203 11:33:14.557051    2102 log.go:172] (0xc0007062c0) Go away received\nI0203 11:33:14.557899    2102 log.go:172] (0xc0007062c0) (0xc000728640) Stream removed, broadcasting: 1\nI0203 11:33:14.557920    2102 log.go:172] (0xc0007062c0) (0xc0007286e0) Stream removed, broadcasting: 3\nI0203 11:33:14.557930    2102 log.go:172] (0xc0007062c0) (0xc00069edc0) Stream removed, broadcasting: 5\n"
Feb  3 11:33:14.571: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:33:14.571: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 11:33:14.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:33:14.986: INFO: stderr: "I0203 11:33:14.721502    2124 log.go:172] (0xc0006de2c0) (0xc00071e640) Create stream\nI0203 11:33:14.721793    2124 log.go:172] (0xc0006de2c0) (0xc00071e640) Stream added, broadcasting: 1\nI0203 11:33:14.725762    2124 log.go:172] (0xc0006de2c0) Reply frame received for 1\nI0203 11:33:14.725800    2124 log.go:172] (0xc0006de2c0) (0xc00071e6e0) Create stream\nI0203 11:33:14.725809    2124 log.go:172] (0xc0006de2c0) (0xc00071e6e0) Stream added, broadcasting: 3\nI0203 11:33:14.726758    2124 log.go:172] (0xc0006de2c0) Reply frame received for 3\nI0203 11:33:14.726818    2124 log.go:172] (0xc0006de2c0) (0xc00067ebe0) Create stream\nI0203 11:33:14.726849    2124 log.go:172] (0xc0006de2c0) (0xc00067ebe0) Stream added, broadcasting: 5\nI0203 11:33:14.727640    2124 log.go:172] (0xc0006de2c0) Reply frame received for 5\nI0203 11:33:14.842938    2124 log.go:172] (0xc0006de2c0) Data frame received for 3\nI0203 11:33:14.843467    2124 log.go:172] (0xc00071e6e0) (3) Data frame handling\nI0203 11:33:14.843593    2124 log.go:172] (0xc00071e6e0) (3) Data frame sent\nI0203 11:33:14.966005    2124 log.go:172] (0xc0006de2c0) Data frame received for 1\nI0203 11:33:14.966103    2124 log.go:172] (0xc00071e640) (1) Data frame handling\nI0203 11:33:14.966122    2124 log.go:172] (0xc00071e640) (1) Data frame sent\nI0203 11:33:14.973734    2124 log.go:172] (0xc0006de2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0203 11:33:14.974226    2124 log.go:172] (0xc0006de2c0) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0203 11:33:14.974298    2124 log.go:172] (0xc0006de2c0) (0xc00067ebe0) Stream removed, broadcasting: 5\nI0203 11:33:14.974360    2124 log.go:172] (0xc0006de2c0) Go away received\nI0203 11:33:14.974717    2124 log.go:172] (0xc0006de2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0203 11:33:14.974750    2124 log.go:172] (0xc0006de2c0) (0xc00071e6e0) Stream removed, broadcasting: 3\nI0203 11:33:14.974767    2124 log.go:172] (0xc0006de2c0) (0xc00067ebe0) Stream removed, broadcasting: 5\n"
Feb  3 11:33:14.986: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:33:14.986: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 11:33:14.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:33:15.791: INFO: stderr: "I0203 11:33:15.465022    2147 log.go:172] (0xc00014c790) (0xc000752640) Create stream\nI0203 11:33:15.465598    2147 log.go:172] (0xc00014c790) (0xc000752640) Stream added, broadcasting: 1\nI0203 11:33:15.473259    2147 log.go:172] (0xc00014c790) Reply frame received for 1\nI0203 11:33:15.473334    2147 log.go:172] (0xc00014c790) (0xc000594dc0) Create stream\nI0203 11:33:15.473385    2147 log.go:172] (0xc00014c790) (0xc000594dc0) Stream added, broadcasting: 3\nI0203 11:33:15.474809    2147 log.go:172] (0xc00014c790) Reply frame received for 3\nI0203 11:33:15.474842    2147 log.go:172] (0xc00014c790) (0xc000594f00) Create stream\nI0203 11:33:15.474851    2147 log.go:172] (0xc00014c790) (0xc000594f00) Stream added, broadcasting: 5\nI0203 11:33:15.476355    2147 log.go:172] (0xc00014c790) Reply frame received for 5\nI0203 11:33:15.658800    2147 log.go:172] (0xc00014c790) Data frame received for 3\nI0203 11:33:15.658865    2147 log.go:172] (0xc000594dc0) (3) Data frame handling\nI0203 11:33:15.658895    2147 log.go:172] (0xc000594dc0) (3) Data frame sent\nI0203 11:33:15.776282    2147 log.go:172] (0xc00014c790) Data frame received for 1\nI0203 11:33:15.776449    2147 log.go:172] (0xc00014c790) (0xc000594dc0) Stream removed, broadcasting: 3\nI0203 11:33:15.776587    2147 log.go:172] (0xc00014c790) (0xc000594f00) Stream removed, broadcasting: 5\nI0203 11:33:15.776651    2147 log.go:172] (0xc000752640) (1) Data frame handling\nI0203 11:33:15.776733    2147 log.go:172] (0xc000752640) (1) Data frame sent\nI0203 11:33:15.776756    2147 log.go:172] (0xc00014c790) (0xc000752640) Stream removed, broadcasting: 1\nI0203 11:33:15.776790    2147 log.go:172] (0xc00014c790) Go away received\nI0203 11:33:15.778206    2147 log.go:172] (0xc00014c790) (0xc000752640) Stream removed, broadcasting: 1\nI0203 11:33:15.778220    2147 log.go:172] (0xc00014c790) (0xc000594dc0) Stream removed, broadcasting: 3\nI0203 11:33:15.778231    2147 log.go:172] (0xc00014c790) (0xc000594f00) Stream removed, broadcasting: 5\n"
Feb  3 11:33:15.791: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:33:15.791: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 11:33:15.791: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 11:33:15.811: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  3 11:33:25.846: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 11:33:25.846: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 11:33:25.846: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
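With readiness broken on all three pods, the test now waits for the controller to observe that, i.e. for the StatefulSet's status.readyReplicas to fall to 0. A minimal sketch of the same check (the status field is real; treating an empty jsonpath result as zero, and the helper itself, are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// readyReplicasIsZero reports whether status.readyReplicas on the StatefulSet
// has reached 0. An empty jsonpath result is treated as zero, since the field
// can be omitted from the object when nothing is ready.
func readyReplicasIsZero(kubeconfig, namespace, name string) bool {
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"get", "statefulset", name, "--namespace="+namespace,
		"-o", "jsonpath={.status.readyReplicas}").Output()
	if err != nil {
		return false
	}
	v := strings.TrimSpace(string(out))
	return v == "" || v == "0"
}

func main() {
	for !readyReplicasIsZero("/root/.kube/config", "e2e-tests-statefulset-wh8hc", "ss") {
		time.Sleep(10 * time.Second) // matches the 10s waits in the log
	}
	fmt.Println("readyReplicas is 0")
}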
Feb  3 11:33:25.888: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:25.888: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:25.889: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:25.889: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:25.889: INFO: 
Feb  3 11:33:25.889: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:26.926: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:26.926: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:26.926: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:26.927: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:26.927: INFO: 
Feb  3 11:33:26.927: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:28.191: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:28.191: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:28.191: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:28.191: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:28.191: INFO: 
Feb  3 11:33:28.191: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:29.290: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:29.290: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:29.290: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:29.290: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:29.290: INFO: 
Feb  3 11:33:29.290: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:30.304: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:30.304: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:30.304: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:30.304: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:30.304: INFO: 
Feb  3 11:33:30.304: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:31.376: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:31.376: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:31.376: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:31.377: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:31.377: INFO: 
Feb  3 11:33:31.377: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:32.958: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:32.958: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:32.959: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:32.959: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:32.959: INFO: 
Feb  3 11:33:32.959: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:33.984: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:33.984: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:33.984: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:33.984: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:33.984: INFO: 
Feb  3 11:33:33.984: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 11:33:35.051: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  3 11:33:35.051: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:19 +0000 UTC  }]
Feb  3 11:33:35.051: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:33:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 11:32:50 +0000 UTC  }]
Feb  3 11:33:35.051: INFO: 
Feb  3 11:33:35.051: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-wh8hc
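The scale-down itself can be driven the same way as the earlier burst (the scaleStatefulSet sketch above with replicas set to 0); the part the log then spends most time on is waiting until no ss-* pods exist. A minimal sketch of that existence check, matching the NotFound responses seen in the retries below (the helper name and pod list iteration are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podGone reports whether the named pod no longer exists, by looking for the
// same "NotFound" error the retries below receive once ss-0 is deleted.
func podGone(kubeconfig, namespace, pod string) bool {
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"get", "pod", pod, "--namespace="+namespace).CombinedOutput()
	if err == nil {
		return false // the pod still exists
	}
	return strings.Contains(string(out), "NotFound")
}

func main() {
	for _, p := range []string{"ss-0", "ss-1", "ss-2"} {
		for !podGone("/root/.kube/config", "e2e-tests-statefulset-wh8hc", p) {
			time.Sleep(10 * time.Second)
		}
		fmt.Println(p, "is gone")
	}
}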
Feb  3 11:33:36.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:36.347: INFO: rc: 1
Feb  3 11:33:36.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0011e9b90 exit status 1   true [0xc0000e90e0 0xc0000e9190 0xc0000e9230] [0xc0000e90e0 0xc0000e9190 0xc0000e9230] [0xc0000e9188 0xc0000e9200] [0x935700 0x935700] 0xc0018b20c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
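Failures like the one above are expected while the scale-down races the restore command: first the nginx container is already stopped ("container not found"), and once ss-0 itself is deleted the API server answers NotFound. The framework simply retries every 10 seconds until it gives up; a minimal sketch of that retry pattern (the attempt limit and helper name are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetry re-runs a shell command inside the pod every 10 seconds
// until it succeeds or the attempts are exhausted, mirroring the
// "Waiting 10s to retry failed RunHostCmd" loop in this log.
func runHostCmdWithRetry(kubeconfig, namespace, pod, shellCmd string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
			"exec", "--namespace="+namespace, pod, "--",
			"/bin/sh", "-c", shellCmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(10 * time.Second)
	}
	return "", lastErr
}

func main() {
	_, err := runHostCmdWithRetry("/root/.kube/config", "e2e-tests-statefulset-wh8hc",
		"ss-0", "mv -v /tmp/index.html /usr/share/nginx/html/ || true", 30)
	fmt.Println(err)
}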

Feb  3 11:33:46.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:46.581: INFO: rc: 1
Feb  3 11:33:46.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010a02d0 exit status 1   true [0xc000fa01b8 0xc000fa01d0 0xc000fa01e8] [0xc000fa01b8 0xc000fa01d0 0xc000fa01e8] [0xc000fa01c8 0xc000fa01e0] [0x935700 0x935700] 0xc001ea99e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:33:56.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:33:56.752: INFO: rc: 1
Feb  3 11:33:56.753: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010a03f0 exit status 1   true [0xc000fa01f0 0xc000fa0208 0xc000fa0220] [0xc000fa01f0 0xc000fa0208 0xc000fa0220] [0xc000fa0200 0xc000fa0218] [0x935700 0x935700] 0xc001ea9ce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:06.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:06.940: INFO: rc: 1
Feb  3 11:34:06.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000e09410 exit status 1   true [0xc00000e338 0xc00000e400 0xc00000e430] [0xc00000e338 0xc00000e400 0xc00000e430] [0xc00000e378 0xc00000e420] [0x935700 0x935700] 0xc0023c4660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:16.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:17.062: INFO: rc: 1
Feb  3 11:34:17.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000e09530 exit status 1   true [0xc00000e510 0xc00000e978 0xc00000ea70] [0xc00000e510 0xc00000e978 0xc00000ea70] [0xc00000e930 0xc00000e9c8] [0x935700 0x935700] 0xc0023c49c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:27.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:27.294: INFO: rc: 1
Feb  3 11:34:27.294: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010a08a0 exit status 1   true [0xc000fa0228 0xc000fa0240 0xc000fa0258] [0xc000fa0228 0xc000fa0240 0xc000fa0258] [0xc000fa0238 0xc000fa0250] [0x935700 0x935700] 0xc001ea9f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:37.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:37.460: INFO: rc: 1
Feb  3 11:34:37.460: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e9d10 exit status 1   true [0xc0000e9248 0xc0000e9298 0xc0000e9318] [0xc0000e9248 0xc0000e9298 0xc0000e9318] [0xc0000e9288 0xc0000e92d0] [0x935700 0x935700] 0xc0018b2c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:47.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:47.613: INFO: rc: 1
Feb  3 11:34:47.613: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010a0b10 exit status 1   true [0xc000fa0260 0xc000fa0278 0xc000fa0290] [0xc000fa0260 0xc000fa0278 0xc000fa0290] [0xc000fa0270 0xc000fa0288] [0x935700 0x935700] 0xc001f0a2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:34:57.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:34:57.737: INFO: rc: 1
Feb  3 11:34:57.737: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000e09680 exit status 1   true [0xc00000eaa8 0xc00000eb48 0xc00000ed28] [0xc00000eaa8 0xc00000eb48 0xc00000ed28] [0xc00000eb30 0xc00000ec88] [0x935700 0x935700] 0xc0023c4d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:07.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:07.909: INFO: rc: 1
Feb  3 11:35:07.910: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0010a0c90 exit status 1   true [0xc000fa0298 0xc000fa02b8 0xc000fa02d0] [0xc000fa0298 0xc000fa02b8 0xc000fa02d0] [0xc000fa02b0 0xc000fa02c8] [0x935700 0x935700] 0xc001f0a540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:17.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:18.057: INFO: rc: 1
Feb  3 11:35:18.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000193290 exit status 1   true [0xc0000e8248 0xc0000e8a10 0xc0000e8b80] [0xc0000e8248 0xc0000e8a10 0xc0000e8b80] [0xc0000e82e8 0xc0000e8ad0] [0x935700 0x935700] 0xc0019d41e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:28.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:28.217: INFO: rc: 1
Feb  3 11:35:28.218: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001528210 exit status 1   true [0xc000a68000 0xc000a68018 0xc000a68038] [0xc000a68000 0xc000a68018 0xc000a68038] [0xc000a68010 0xc000a68028] [0x935700 0x935700] 0xc001ea81e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:38.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:38.447: INFO: rc: 1
Feb  3 11:35:38.448: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000193470 exit status 1   true [0xc0000e8bb8 0xc0000e8c58 0xc0000e8d30] [0xc0000e8bb8 0xc0000e8c58 0xc0000e8d30] [0xc0000e8c00 0xc0000e8d28] [0x935700 0x935700] 0xc0019d4660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:48.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:48.639: INFO: rc: 1
Feb  3 11:35:48.640: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae120 exit status 1   true [0xc00000e010 0xc00000e1e0 0xc00000e348] [0xc00000e010 0xc00000e1e0 0xc00000e348] [0xc00000e070 0xc00000e338] [0x935700 0x935700] 0xc0018b2900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:35:58.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:35:58.915: INFO: rc: 1
Feb  3 11:35:58.916: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e8120 exit status 1   true [0xc000fa0000 0xc000fa0018 0xc000fa0030] [0xc000fa0000 0xc000fa0018 0xc000fa0030] [0xc000fa0010 0xc000fa0028] [0x935700 0x935700] 0xc001616600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:36:08.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:36:09.047: INFO: rc: 1
Feb  3 11:36:09.047: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e8270 exit status 1   true [0xc000fa0038 0xc000fa0050 0xc000fa0068] [0xc000fa0038 0xc000fa0050 0xc000fa0068] [0xc000fa0048 0xc000fa0060] [0x935700 0x935700] 0xc001616b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:36:19.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:36:19.545: INFO: rc: 1
Feb  3 11:36:19.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae3c0 exit status 1   true [0xc00000e378 0xc00000e420 0xc00000e918] [0xc00000e378 0xc00000e420 0xc00000e918] [0xc00000e418 0xc00000e510] [0x935700 0x935700] 0xc0018b30e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:36:29.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:36:29.721: INFO: rc: 1
Feb  3 11:36:29.721: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae540 exit status 1   true [0xc00000e930 0xc00000e9c8 0xc00000eab8] [0xc00000e930 0xc00000e9c8 0xc00000eab8] [0xc00000e988 0xc00000eaa8] [0x935700 0x935700] 0xc0018b33e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:36:39.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:36:39.913: INFO: rc: 1
Feb  3 11:36:39.913: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e8390 exit status 1   true [0xc000fa0070 0xc000fa0088 0xc000fa00a0] [0xc000fa0070 0xc000fa0088 0xc000fa00a0] [0xc000fa0080 0xc000fa0098] [0x935700 0x935700] 0xc001616de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:36:49.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:36:50.104: INFO: rc: 1
Feb  3 11:36:50.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000193620 exit status 1   true [0xc0000e8d38 0xc0000e8de8 0xc0000e8e58] [0xc0000e8d38 0xc0000e8de8 0xc0000e8e58] [0xc0000e8dc8 0xc0000e8e30] [0x935700 0x935700] 0xc0019d4a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:00.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:00.246: INFO: rc: 1
Feb  3 11:37:00.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e8510 exit status 1   true [0xc000fa00a8 0xc000fa00c0 0xc000fa00d8] [0xc000fa00a8 0xc000fa00c0 0xc000fa00d8] [0xc000fa00b8 0xc000fa00d0] [0x935700 0x935700] 0xc001617080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:10.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:10.384: INFO: rc: 1
Feb  3 11:37:10.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011e8750 exit status 1   true [0xc000fa00e0 0xc000fa00f8 0xc000fa0110] [0xc000fa00e0 0xc000fa00f8 0xc000fa0110] [0xc000fa00f0 0xc000fa0108] [0x935700 0x935700] 0xc0016177a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:20.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:20.677: INFO: rc: 1
Feb  3 11:37:20.677: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0001932c0 exit status 1   true [0xc0000e8248 0xc0000e8a10 0xc0000e8b80] [0xc0000e8248 0xc0000e8a10 0xc0000e8b80] [0xc0000e82e8 0xc0000e8ad0] [0x935700 0x935700] 0xc0019d4180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:30.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:30.885: INFO: rc: 1
Feb  3 11:37:30.886: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001528390 exit status 1   true [0xc000a68000 0xc000a68018 0xc000a68038] [0xc000a68000 0xc000a68018 0xc000a68038] [0xc000a68010 0xc000a68028] [0x935700 0x935700] 0xc001ea8240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:40.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:41.043: INFO: rc: 1
Feb  3 11:37:41.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0015286c0 exit status 1   true [0xc000a68040 0xc000a68058 0xc000a68070] [0xc000a68040 0xc000a68058 0xc000a68070] [0xc000a68050 0xc000a68068] [0x935700 0x935700] 0xc001ea84e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:37:51.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:37:51.207: INFO: rc: 1
Feb  3 11:37:51.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae150 exit status 1   true [0xc00000e010 0xc00000e1e0 0xc00000e348] [0xc00000e010 0xc00000e1e0 0xc00000e348] [0xc00000e070 0xc00000e338] [0x935700 0x935700] 0xc0018b2900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:38:01.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:38:01.427: INFO: rc: 1
Feb  3 11:38:01.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000193560 exit status 1   true [0xc0000e8bb8 0xc0000e8c58 0xc0000e8d30] [0xc0000e8bb8 0xc0000e8c58 0xc0000e8d30] [0xc0000e8c00 0xc0000e8d28] [0x935700 0x935700] 0xc0019d4600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:38:11.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:38:11.576: INFO: rc: 1
Feb  3 11:38:11.576: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae300 exit status 1   true [0xc00000e378 0xc00000e420 0xc00000e918] [0xc00000e378 0xc00000e420 0xc00000e918] [0xc00000e418 0xc00000e510] [0x935700 0x935700] 0xc0018b30e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:38:21.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:38:21.750: INFO: rc: 1
Feb  3 11:38:21.751: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fae570 exit status 1   true [0xc00000e930 0xc00000e9c8 0xc00000eab8] [0xc00000e930 0xc00000e9c8 0xc00000eab8] [0xc00000e988 0xc00000eaa8] [0x935700 0x935700] 0xc0018b33e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:38:31.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:38:31.939: INFO: rc: 1
Feb  3 11:38:31.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001528990 exit status 1   true [0xc000a68078 0xc000a68090 0xc000a680a8] [0xc000a68078 0xc000a68090 0xc000a680a8] [0xc000a68088 0xc000a680a0] [0x935700 0x935700] 0xc001ea9020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  3 11:38:41.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wh8hc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:38:42.156: INFO: rc: 1
Feb  3 11:38:42.156: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb  3 11:38:42.156: INFO: Scaling statefulset ss to 0
Feb  3 11:38:42.341: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  3 11:38:42.346: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wh8hc
Feb  3 11:38:42.361: INFO: Scaling statefulset ss to 0
Feb  3 11:38:42.394: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 11:38:42.399: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:38:42.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wh8hc" for this suite.
Feb  3 11:38:50.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:38:51.056: INFO: namespace: e2e-tests-statefulset-wh8hc, resource: bindings, ignored listing per whitelist
Feb  3 11:38:51.134: INFO: namespace e2e-tests-statefulset-wh8hc deletion completed in 8.570133909s

• [SLOW TEST:391.552 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
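The long block of "Waiting 10s to retry failed RunHostCmd" entries above is the framework re-running one kubectl exec every ten seconds until pod ss-0 exists again. A minimal stand-alone Go sketch of that retry shape, using only the standard library; the kubeconfig path, namespace, pod name and shell command are the ones visible in the log, while the function name, deadline and error wording are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retryHostCmd re-runs a kubectl exec until it succeeds or the deadline
    // passes, mirroring the 10s retry cadence visible in the log above.
    func retryHostCmd(ns, pod, shellCmd string, deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for {
            cmd := exec.Command("/usr/local/bin/kubectl",
                "--kubeconfig=/root/.kube/config",
                "exec", "--namespace="+ns, pod,
                "--", "/bin/sh", "-c", shellCmd)
            out, err := cmd.CombinedOutput()
            if err == nil {
                fmt.Printf("output: %s\n", out)
                return nil
            }
            if time.Now().After(end) {
                return fmt.Errorf("giving up on %q: %v (%s)", shellCmd, err, out)
            }
            time.Sleep(10 * time.Second)
        }
    }

    func main() {
        _ = retryHostCmd("e2e-tests-statefulset-wh8hc", "ss-0",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true", 5*time.Minute)
    }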
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:38:51.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 11:38:51.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lj88b'
Feb  3 11:38:51.661: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 11:38:51.661: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb  3 11:38:53.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lj88b'
Feb  3 11:38:54.288: INFO: stderr: ""
Feb  3 11:38:54.288: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:38:54.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lj88b" for this suite.
Feb  3 11:39:00.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:39:00.904: INFO: namespace: e2e-tests-kubectl-lj88b, resource: bindings, ignored listing per whitelist
Feb  3 11:39:00.919: INFO: namespace e2e-tests-kubectl-lj88b deletion completed in 6.590252308s

• [SLOW TEST:9.785 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:39:00.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tzq9h
Feb  3 11:39:11.335: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tzq9h
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 11:39:11.344: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:43:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tzq9h" for this suite.
Feb  3 11:43:21.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:43:21.387: INFO: namespace: e2e-tests-container-probe-tzq9h, resource: bindings, ignored listing per whitelist
Feb  3 11:43:21.454: INFO: namespace e2e-tests-container-probe-tzq9h deletion completed in 8.30596863s

• [SLOW TEST:260.534 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
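The probe spec above starts pod liveness-http and then verifies that its restartCount stays at 0 for roughly four minutes. A rough Go sketch of the kind of /healthz HTTP liveness probe such a pod carries, written against the 1.13-era k8s.io/api types used by this suite (the embedded field is named Handler there; newer releases call it ProbeHandler). The image, port and timing values are illustrative, not read from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // illustrative image
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{ // ProbeHandler in newer API versions
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080),
                            },
                        },
                        InitialDelaySeconds: 15,
                        PeriodSeconds:       10,
                        FailureThreshold:    3,
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // kubelet restarts the container only if /healthz starts failing
    }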
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:43:21.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s6qjp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 11:43:21.644: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 11:43:55.877: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-s6qjp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 11:43:55.877: INFO: >>> kubeConfig: /root/.kube/config
I0203 11:43:55.971854       8 log.go:172] (0xc000ffe790) (0xc00210bf40) Create stream
I0203 11:43:55.972058       8 log.go:172] (0xc000ffe790) (0xc00210bf40) Stream added, broadcasting: 1
I0203 11:43:55.978586       8 log.go:172] (0xc000ffe790) Reply frame received for 1
I0203 11:43:55.978677       8 log.go:172] (0xc000ffe790) (0xc001bdcaa0) Create stream
I0203 11:43:55.978700       8 log.go:172] (0xc000ffe790) (0xc001bdcaa0) Stream added, broadcasting: 3
I0203 11:43:55.980071       8 log.go:172] (0xc000ffe790) Reply frame received for 3
I0203 11:43:55.980115       8 log.go:172] (0xc000ffe790) (0xc001b47680) Create stream
I0203 11:43:55.980128       8 log.go:172] (0xc000ffe790) (0xc001b47680) Stream added, broadcasting: 5
I0203 11:43:55.981457       8 log.go:172] (0xc000ffe790) Reply frame received for 5
I0203 11:43:56.146241       8 log.go:172] (0xc000ffe790) Data frame received for 3
I0203 11:43:56.146367       8 log.go:172] (0xc001bdcaa0) (3) Data frame handling
I0203 11:43:56.146394       8 log.go:172] (0xc001bdcaa0) (3) Data frame sent
I0203 11:43:56.302109       8 log.go:172] (0xc000ffe790) Data frame received for 1
I0203 11:43:56.302268       8 log.go:172] (0xc000ffe790) (0xc001bdcaa0) Stream removed, broadcasting: 3
I0203 11:43:56.302343       8 log.go:172] (0xc00210bf40) (1) Data frame handling
I0203 11:43:56.302388       8 log.go:172] (0xc00210bf40) (1) Data frame sent
I0203 11:43:56.302408       8 log.go:172] (0xc000ffe790) (0xc00210bf40) Stream removed, broadcasting: 1
I0203 11:43:56.304444       8 log.go:172] (0xc000ffe790) (0xc001b47680) Stream removed, broadcasting: 5
I0203 11:43:56.304492       8 log.go:172] (0xc000ffe790) (0xc00210bf40) Stream removed, broadcasting: 1
I0203 11:43:56.304512       8 log.go:172] (0xc000ffe790) (0xc001bdcaa0) Stream removed, broadcasting: 3
I0203 11:43:56.304526       8 log.go:172] (0xc000ffe790) (0xc001b47680) Stream removed, broadcasting: 5
Feb  3 11:43:56.305: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:43:56.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0203 11:43:56.305655       8 log.go:172] (0xc000ffe790) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-s6qjp" for this suite.
Feb  3 11:44:20.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:44:20.448: INFO: namespace: e2e-tests-pod-network-test-s6qjp, resource: bindings, ignored listing per whitelist
Feb  3 11:44:20.744: INFO: namespace e2e-tests-pod-network-test-s6qjp deletion completed in 24.410916536s

• [SLOW TEST:59.289 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
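The connectivity check above execs curl from the host-network test pod against http://10.32.0.4:8080/hostName and expects the answer to identify netserver-0. A standard-library Go sketch of the same probe; the target URL is the one in the log and the timeouts mirror curl's --max-time 15 / --connect-timeout 1 flags:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 15 * time.Second, // like curl --max-time 15
            Transport: &http.Transport{
                DialContext: (&net.Dialer{Timeout: 1 * time.Second}).DialContext, // like --connect-timeout 1
            },
        }
        resp, err := client.Get("http://10.32.0.4:8080/hostName")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        // The test compares the returned hostname against the expected
        // endpoint list, e.g. [netserver-0] in the log above.
        fmt.Println("hostname:", strings.TrimSpace(string(body)))
    }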
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:44:20.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:44:34.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-n745z" for this suite.
Feb  3 11:44:58.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:44:58.432: INFO: namespace: e2e-tests-replication-controller-n745z, resource: bindings, ignored listing per whitelist
Feb  3 11:44:58.661: INFO: namespace e2e-tests-replication-controller-n745z deletion completed in 24.471963498s

• [SLOW TEST:37.916 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:44:58.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  3 11:45:09.040: INFO: Pod pod-hostip-9b097fcc-467a-11ea-ab15-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:45:09.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lwt6x" for this suite.
Feb  3 11:45:33.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:45:33.232: INFO: namespace: e2e-tests-pods-lwt6x, resource: bindings, ignored listing per whitelist
Feb  3 11:45:33.256: INFO: namespace e2e-tests-pods-lwt6x deletion completed in 24.208779723s

• [SLOW TEST:34.595 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:45:33.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:45:33.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-5kmsv" to be "success or failure"
Feb  3 11:45:33.624: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.122986ms
Feb  3 11:45:35.701: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127376178s
Feb  3 11:45:37.737: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163580016s
Feb  3 11:45:39.757: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183507742s
Feb  3 11:45:41.776: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202229472s
Feb  3 11:45:43.796: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221711137s
Feb  3 11:45:45.911: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.337060311s
STEP: Saw pod success
Feb  3 11:45:45.911: INFO: Pod "downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:45:45.928: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:45:46.024: INFO: Waiting for pod downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005 to disappear
Feb  3 11:45:46.038: INFO: Pod downwardapi-volume-afa8eeb3-467a-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:45:46.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5kmsv" for this suite.
Feb  3 11:45:52.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:45:52.313: INFO: namespace: e2e-tests-downward-api-5kmsv, resource: bindings, ignored listing per whitelist
Feb  3 11:45:52.334: INFO: namespace e2e-tests-downward-api-5kmsv deletion completed in 6.269636944s

• [SLOW TEST:19.077 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
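The "podname only" spec mounts a downward API volume whose single file is populated from metadata.name and then reads it back from the client-container. A sketch of a volume of that shape in k8s.io/api/core/v1 terms; the volume name and mount path are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // One downward API file, "podname", populated from metadata.name.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}

        out, _ := json.MarshalIndent(struct {
            Volume corev1.Volume      `json:"volume"`
            Mount  corev1.VolumeMount `json:"volumeMount"`
        }{vol, mount}, "", "  ")
        fmt.Println(string(out)) // the test container then reads /etc/podinfo/podname
    }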
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:45:52.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:46:04.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xjws2" for this suite.
Feb  3 11:46:54.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:46:55.022: INFO: namespace: e2e-tests-kubelet-test-xjws2, resource: bindings, ignored listing per whitelist
Feb  3 11:46:55.143: INFO: namespace e2e-tests-kubelet-test-xjws2 deletion completed in 50.285687392s

• [SLOW TEST:62.808 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:46:55.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e0654a86-467a-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 11:46:55.431: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-r99rz" to be "success or failure"
Feb  3 11:46:55.470: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.518967ms
Feb  3 11:46:57.486: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055706107s
Feb  3 11:46:59.499: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068244901s
Feb  3 11:47:01.876: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445693237s
Feb  3 11:47:04.198: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767239729s
Feb  3 11:47:06.830: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.399465321s
STEP: Saw pod success
Feb  3 11:47:06.830: INFO: Pod "pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:47:06.840: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 11:47:07.305: INFO: Waiting for pod pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005 to disappear
Feb  3 11:47:07.358: INFO: Pod pod-projected-secrets-e07627ce-467a-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:47:07.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r99rz" for this suite.
Feb  3 11:47:13.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:47:13.668: INFO: namespace: e2e-tests-projected-r99rz, resource: bindings, ignored listing per whitelist
Feb  3 11:47:13.866: INFO: namespace e2e-tests-projected-r99rz deletion completed in 6.494777528s

• [SLOW TEST:18.723 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
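The projected-secret spec above mounts a secret through a projected volume with a key-to-path mapping and an explicit per-item file mode. A sketch of that projection; the secret name follows the pattern in the log (the suite appends a UID), while the key, remapped path and 0400 mode are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // explicit per-item mode: the "Item Mode set" part of the test name
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-map", // illustrative; the suite appends a UID
                            },
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // illustrative key
                                Path: "new-path-data-1", // remapped file name inside the mount
                                Mode: &mode,
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }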
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:47:13.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  3 11:47:14.439: INFO: Waiting up to 5m0s for pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-wkkpj" to be "success or failure"
Feb  3 11:47:14.453: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.731663ms
Feb  3 11:47:16.739: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299559398s
Feb  3 11:47:18.753: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313761176s
Feb  3 11:47:20.913: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473750337s
Feb  3 11:47:23.101: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662230727s
Feb  3 11:47:25.270: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.83070512s
Feb  3 11:47:27.288: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.848846655s
STEP: Saw pod success
Feb  3 11:47:27.288: INFO: Pod "downward-api-ebb0be17-467a-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:47:27.294: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ebb0be17-467a-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 11:47:27.901: INFO: Waiting for pod downward-api-ebb0be17-467a-11ea-ab15-0242ac110005 to disappear
Feb  3 11:47:27.917: INFO: Pod downward-api-ebb0be17-467a-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:47:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wkkpj" for this suite.
Feb  3 11:47:36.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:47:36.137: INFO: namespace: e2e-tests-downward-api-wkkpj, resource: bindings, ignored listing per whitelist
Feb  3 11:47:36.289: INFO: namespace e2e-tests-downward-api-wkkpj deletion completed in 8.345411593s

• [SLOW TEST:22.421 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
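The sig-node Downward API spec injects the container's own limits.cpu/memory and requests.cpu/memory as environment variables via resourceFieldRef. A sketch of the env entries that produce that; the container name dapi-container matches the log, the variable names are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // resourceEnv builds one env var whose value comes from the container's own
    // resource accounting (limits.cpu, requests.memory, ...).
    func resourceEnv(name, resource string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    ContainerName: "dapi-container",
                    Resource:      resource,
                },
            },
        }
    }

    func main() {
        env := []corev1.EnvVar{
            resourceEnv("CPU_LIMIT", "limits.cpu"),
            resourceEnv("MEMORY_LIMIT", "limits.memory"),
            resourceEnv("CPU_REQUEST", "requests.cpu"),
            resourceEnv("MEMORY_REQUEST", "requests.memory"),
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }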
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:47:36.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0203 11:47:49.337204       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 11:47:49.337: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:47:49.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fv7bb" for this suite.
Feb  3 11:48:12.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:48:12.942: INFO: namespace: e2e-tests-gc-fv7bb, resource: bindings, ignored listing per whitelist
Feb  3 11:48:12.969: INFO: namespace e2e-tests-gc-fv7bb deletion completed in 23.626745204s

• [SLOW TEST:36.679 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
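The garbage-collector spec gives half of the pods created by simpletest-rc-to-be-deleted a second owner, simpletest-rc-to-stay, and checks that they survive deletion of the first. The mechanism is metadata.ownerReferences; a sketch of a pod carrying both owners, with placeholder UIDs (a real reference must carry the owner object's actual UID):

    package main

    import (
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func main() {
        ctrl := true
        // The garbage collector only deletes a dependent once *all* of its
        // owners are gone, which is what the test above relies on.
        owners := []metav1.OwnerReference{
            {
                APIVersion: "v1",
                Kind:       "ReplicationController",
                Name:       "simpletest-rc-to-be-deleted",
                UID:        types.UID("00000000-0000-0000-0000-000000000001"), // placeholder
                Controller: &ctrl,
            },
            {
                APIVersion: "v1",
                Kind:       "ReplicationController",
                Name:       "simpletest-rc-to-stay",
                UID:        types.UID("00000000-0000-0000-0000-000000000002"), // placeholder
            },
        }
        out, _ := json.MarshalIndent(owners, "", "  ")
        fmt.Println(string(out))
    }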
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:48:12.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-10540fa9-467b-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 11:48:16.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-bxdl2" to be "success or failure"
Feb  3 11:48:16.689: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 266.822667ms
Feb  3 11:48:18.711: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288848178s
Feb  3 11:48:20.910: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488330999s
Feb  3 11:48:22.933: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511169345s
Feb  3 11:48:26.000: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.578441637s
Feb  3 11:48:28.018: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.595823473s
Feb  3 11:48:30.028: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.606461392s
Feb  3 11:48:32.047: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.624648543s
STEP: Saw pod success
Feb  3 11:48:32.047: INFO: Pod "pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:48:32.053: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 11:48:32.106: INFO: Waiting for pod pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:48:32.116: INFO: Pod pod-projected-secrets-10b7afc9-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:48:32.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bxdl2" for this suite.
Feb  3 11:48:38.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:48:38.409: INFO: namespace: e2e-tests-projected-bxdl2, resource: bindings, ignored listing per whitelist
Feb  3 11:48:38.429: INFO: namespace e2e-tests-projected-bxdl2 deletion completed in 6.306583379s

• [SLOW TEST:25.458 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:48:38.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 11:48:38.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-44dtk'
Feb  3 11:48:40.809: INFO: stderr: ""
Feb  3 11:48:40.809: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb  3 11:48:40.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-44dtk'
Feb  3 11:48:52.651: INFO: stderr: ""
Feb  3 11:48:52.651: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:48:52.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-44dtk" for this suite.
Feb  3 11:48:58.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:48:58.903: INFO: namespace: e2e-tests-kubectl-44dtk, resource: bindings, ignored listing per whitelist
Feb  3 11:48:58.975: INFO: namespace e2e-tests-kubectl-44dtk deletion completed in 6.230355388s

• [SLOW TEST:20.545 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:48:58.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 11:48:59.195: INFO: Waiting up to 5m0s for pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-xck6s" to be "success or failure"
Feb  3 11:48:59.354: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 158.580505ms
Feb  3 11:49:01.708: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.513273933s
Feb  3 11:49:03.730: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534741326s
Feb  3 11:49:06.010: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81473636s
Feb  3 11:49:08.026: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.830942948s
Feb  3 11:49:10.049: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853627768s
Feb  3 11:49:12.464: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.268809574s
STEP: Saw pod success
Feb  3 11:49:12.465: INFO: Pod "pod-2a3dd864-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:49:12.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2a3dd864-467b-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:49:12.708: INFO: Waiting for pod pod-2a3dd864-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:49:12.720: INFO: Pod pod-2a3dd864-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:49:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xck6s" for this suite.
Feb  3 11:49:18.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:49:18.939: INFO: namespace: e2e-tests-emptydir-xck6s, resource: bindings, ignored listing per whitelist
Feb  3 11:49:18.972: INFO: namespace e2e-tests-emptydir-xck6s deletion completed in 6.241183271s

• [SLOW TEST:19.997 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
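The emptyDir spec writes a file as root with mode 0644 on the default (node-disk-backed) medium and verifies its contents and permissions. A small sketch of the volume and the mode being checked; the volume name is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // "default" medium means node-local storage; corev1.StorageMediumMemory
        // would switch the same volume to tmpfs instead.
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))

        // The (root,0644,default) part of the test name refers to the file
        // permissions written inside the mounted volume:
        fmt.Println(os.FileMode(0644)) // -rw-r--r--
    }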
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:49:18.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 11:49:19.198: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  3 11:49:19.244: INFO: Number of nodes with available pods: 0
Feb  3 11:49:19.244: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  3 11:49:19.377: INFO: Number of nodes with available pods: 0
Feb  3 11:49:19.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:20.390: INFO: Number of nodes with available pods: 0
Feb  3 11:49:20.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:21.417: INFO: Number of nodes with available pods: 0
Feb  3 11:49:21.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:22.428: INFO: Number of nodes with available pods: 0
Feb  3 11:49:22.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:23.390: INFO: Number of nodes with available pods: 0
Feb  3 11:49:23.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:24.390: INFO: Number of nodes with available pods: 0
Feb  3 11:49:24.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:25.394: INFO: Number of nodes with available pods: 0
Feb  3 11:49:25.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:26.393: INFO: Number of nodes with available pods: 0
Feb  3 11:49:26.393: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:27.388: INFO: Number of nodes with available pods: 0
Feb  3 11:49:27.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:28.396: INFO: Number of nodes with available pods: 0
Feb  3 11:49:28.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:29.394: INFO: Number of nodes with available pods: 1
Feb  3 11:49:29.394: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  3 11:49:29.568: INFO: Number of nodes with available pods: 1
Feb  3 11:49:29.568: INFO: Number of running nodes: 0, number of available pods: 1
Feb  3 11:49:30.599: INFO: Number of nodes with available pods: 0
Feb  3 11:49:30.599: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  3 11:49:30.632: INFO: Number of nodes with available pods: 0
Feb  3 11:49:30.632: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:31.669: INFO: Number of nodes with available pods: 0
Feb  3 11:49:31.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:32.765: INFO: Number of nodes with available pods: 0
Feb  3 11:49:32.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:33.653: INFO: Number of nodes with available pods: 0
Feb  3 11:49:33.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:34.676: INFO: Number of nodes with available pods: 0
Feb  3 11:49:34.677: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:35.649: INFO: Number of nodes with available pods: 0
Feb  3 11:49:35.649: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:36.646: INFO: Number of nodes with available pods: 0
Feb  3 11:49:36.646: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:37.649: INFO: Number of nodes with available pods: 0
Feb  3 11:49:37.649: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:38.670: INFO: Number of nodes with available pods: 0
Feb  3 11:49:38.670: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:39.653: INFO: Number of nodes with available pods: 0
Feb  3 11:49:39.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:40.672: INFO: Number of nodes with available pods: 0
Feb  3 11:49:40.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:41.653: INFO: Number of nodes with available pods: 0
Feb  3 11:49:41.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:43.002: INFO: Number of nodes with available pods: 0
Feb  3 11:49:43.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:44.102: INFO: Number of nodes with available pods: 0
Feb  3 11:49:44.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:44.717: INFO: Number of nodes with available pods: 0
Feb  3 11:49:44.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:45.647: INFO: Number of nodes with available pods: 0
Feb  3 11:49:45.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 11:49:46.655: INFO: Number of nodes with available pods: 1
Feb  3 11:49:46.655: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tp5vw, will wait for the garbage collector to delete the pods
Feb  3 11:49:46.742: INFO: Deleting DaemonSet.extensions daemon-set took: 18.223875ms
Feb  3 11:49:46.842: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.75593ms
Feb  3 11:49:53.187: INFO: Number of nodes with available pods: 0
Feb  3 11:49:53.187: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 11:49:53.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tp5vw/daemonsets","resourceVersion":"20412247"},"items":null}

Feb  3 11:49:53.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tp5vw/pods","resourceVersion":"20412247"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:49:53.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tp5vw" for this suite.
Feb  3 11:49:59.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:49:59.320: INFO: namespace: e2e-tests-daemonsets-tp5vw, resource: bindings, ignored listing per whitelist
Feb  3 11:49:59.430: INFO: namespace e2e-tests-daemonsets-tp5vw deletion completed in 6.17861823s

• [SLOW TEST:40.458 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
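The "complex daemon" spec drives a DaemonSet with a node selector through label changes: the selector initially matches nothing, the node is labelled blue so the pod schedules, then the label flips to green and the update strategy is switched to RollingUpdate. A sketch of a DaemonSet of that shape in k8s.io/api/apps/v1 terms; the label keys, values and image are illustrative (blue/green come from the STEP lines above):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Only nodes carrying this label run the daemon pod; the test
                        // toggles the node between "blue" and "green" to (un)schedule it.
                        NodeSelector: map[string]string{"color": "blue"},
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine", // image used elsewhere in this run
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }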
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:49:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb  3 11:49:59.803: INFO: Waiting up to 5m0s for pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-containers-qpwz9" to be "success or failure"
Feb  3 11:49:59.844: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.156298ms
Feb  3 11:50:01.876: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07287435s
Feb  3 11:50:03.910: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106209457s
Feb  3 11:50:05.942: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139207124s
Feb  3 11:50:07.954: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150218325s
Feb  3 11:50:09.970: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166532478s
STEP: Saw pod success
Feb  3 11:50:09.970: INFO: Pod "client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:50:09.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:50:10.561: INFO: Waiting for pod client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:50:10.772: INFO: Pod client-containers-4e5d6d08-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:50:10.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qpwz9" for this suite.
Feb  3 11:50:16.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:50:16.898: INFO: namespace: e2e-tests-containers-qpwz9, resource: bindings, ignored listing per whitelist
Feb  3 11:50:17.173: INFO: namespace e2e-tests-containers-qpwz9 deletion completed in 6.380013915s

• [SLOW TEST:17.742 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
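
The test above relies on the default-command rule: when a container spec leaves Command and Args unset, the kubelet starts the container with the image's own ENTRYPOINT and CMD. Below is a minimal sketch of that kind of pod spec written against the k8s.io/api Go types; it is illustrative only, not the suite's own fixture, and the image name is a placeholder.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Leaving Command and Args unset means the container runs the image's
        // own ENTRYPOINT and CMD, which is the behaviour this test verifies.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // placeholder; the suite uses its own test image
                    // Command: nil, Args: nil -> image defaults apply
                }},
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Containers[0])
    }
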
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:50:17.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:50:17.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-z2hsq" to be "success or failure"
Feb  3 11:50:17.490: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.46735ms
Feb  3 11:50:19.888: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47048898s
Feb  3 11:50:21.916: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498278403s
Feb  3 11:50:24.003: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585972929s
Feb  3 11:50:26.033: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616212215s
Feb  3 11:50:28.052: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635132073s
Feb  3 11:50:30.066: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.648467335s
STEP: Saw pod success
Feb  3 11:50:30.066: INFO: Pod "downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:50:30.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:50:31.699: INFO: Waiting for pod downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:50:31.721: INFO: Pod downwardapi-volume-58dbb3b0-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:50:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z2hsq" for this suite.
Feb  3 11:50:37.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:50:37.971: INFO: namespace: e2e-tests-downward-api-z2hsq, resource: bindings, ignored listing per whitelist
Feb  3 11:50:38.063: INFO: namespace e2e-tests-downward-api-z2hsq deletion completed in 6.326139772s

• [SLOW TEST:20.890 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
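
The downward API volume test above depends on the fallback rule that, when a container declares no CPU limit, a resourceFieldRef such as limits.cpu resolves to the node's allocatable CPU. A rough sketch of a pod spec exercising this, assuming the k8s.io/api and k8s.io/apimachinery modules; the mount path, file name and image are placeholders, not the suite's values.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // No CPU limit is set on the container, so the value exposed through the
        // downward API volume falls back to the node's allocatable CPU.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox", // placeholder
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                    Divisor:       resource.MustParse("1m"),
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Spec.Volumes[0].VolumeSource.DownwardAPI.Items[0].Path)
    }
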
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:50:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  3 11:50:38.299: INFO: Waiting up to 5m0s for pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-qr658" to be "success or failure"
Feb  3 11:50:38.320: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.461796ms
Feb  3 11:50:40.341: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042422283s
Feb  3 11:50:42.369: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069778515s
Feb  3 11:50:44.503: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20417468s
Feb  3 11:50:46.535: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235806894s
Feb  3 11:50:48.568: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.269536657s
STEP: Saw pod success
Feb  3 11:50:48.569: INFO: Pod "downward-api-654f4264-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:50:48.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-654f4264-467b-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 11:50:48.794: INFO: Waiting for pod downward-api-654f4264-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:50:48.807: INFO: Pod downward-api-654f4264-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:50:48.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qr658" for this suite.
Feb  3 11:50:54.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:50:55.044: INFO: namespace: e2e-tests-downward-api-qr658, resource: bindings, ignored listing per whitelist
Feb  3 11:50:55.129: INFO: namespace e2e-tests-downward-api-qr658 deletion completed in 6.296800653s

• [SLOW TEST:17.065 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
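
The env-var variant above uses the same node-allocatable fallback, but through resourceFieldRef inside an EnvVarSource rather than a volume file. A small illustrative container spec follows; it is not the test's actual fixture, and the variable names and image are placeholders.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // With no resources.limits declared on the container, these env vars
        // resolve to the node's allocatable CPU and memory.
        container := corev1.Container{
            Name:    "dapi-container",
            Image:   "busybox", // placeholder
            Command: []string{"sh", "-c", "env"},
            Env: []corev1.EnvVar{
                {
                    Name: "CPU_LIMIT",
                    ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                    },
                },
                {
                    Name: "MEMORY_LIMIT",
                    ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                    },
                },
            },
        }
        fmt.Println(len(container.Env), "downward API env vars")
    }
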
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:50:55.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  3 11:50:55.327: INFO: Waiting up to 5m0s for pod "pod-6f75b556-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-rmckb" to be "success or failure"
Feb  3 11:50:55.367: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.965105ms
Feb  3 11:50:57.580: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253097345s
Feb  3 11:50:59.601: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273613386s
Feb  3 11:51:01.617: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29015865s
Feb  3 11:51:03.629: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301849286s
Feb  3 11:51:05.643: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.316304671s
Feb  3 11:51:07.658: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.330521872s
STEP: Saw pod success
Feb  3 11:51:07.658: INFO: Pod "pod-6f75b556-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:51:07.664: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6f75b556-467b-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:51:07.990: INFO: Waiting for pod pod-6f75b556-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:51:08.008: INFO: Pod pod-6f75b556-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:51:08.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rmckb" for this suite.
Feb  3 11:51:14.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:51:14.337: INFO: namespace: e2e-tests-emptydir-rmckb, resource: bindings, ignored listing per whitelist
Feb  3 11:51:14.413: INFO: namespace e2e-tests-emptydir-rmckb deletion completed in 6.396686136s

• [SLOW TEST:19.284 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
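
The emptyDir test above writes a 0666-mode file as a non-root user into an emptyDir backed by the node's default medium and checks what the container sees. An illustrative pod spec under those assumptions; the UID, image, command and paths are placeholders, not the suite's exact values.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        nonRootUID := int64(1001) // placeholder non-root UID
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox", // placeholder
                    Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "" selects the node's default storage medium (disk).
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }
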
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:51:14.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  3 11:51:15.021: INFO: Waiting up to 5m0s for pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-hff9k" to be "success or failure"
Feb  3 11:51:15.068: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.918364ms
Feb  3 11:51:17.087: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066015253s
Feb  3 11:51:19.103: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081892191s
Feb  3 11:51:21.265: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243797457s
Feb  3 11:51:23.360: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338592754s
Feb  3 11:51:25.429: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.407542129s
STEP: Saw pod success
Feb  3 11:51:25.429: INFO: Pod "pod-7b1f497a-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:51:25.440: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7b1f497a-467b-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 11:51:25.721: INFO: Waiting for pod pod-7b1f497a-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:51:25.732: INFO: Pod pod-7b1f497a-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:51:25.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hff9k" for this suite.
Feb  3 11:51:31.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:51:31.907: INFO: namespace: e2e-tests-emptydir-hff9k, resource: bindings, ignored listing per whitelist
Feb  3 11:51:31.999: INFO: namespace e2e-tests-emptydir-hff9k deletion completed in 6.259926799s

• [SLOW TEST:17.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:51:32.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-85760b83-467b-11ea-ab15-0242ac110005
STEP: Creating secret with name s-test-opt-upd-85760d79-467b-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-85760b83-467b-11ea-ab15-0242ac110005
STEP: Updating secret s-test-opt-upd-85760d79-467b-11ea-ab15-0242ac110005
STEP: Creating secret with name s-test-opt-create-85760dc3-467b-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:52:57.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-68wxd" for this suite.
Feb  3 11:53:21.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:53:21.545: INFO: namespace: e2e-tests-projected-68wxd, resource: bindings, ignored listing per whitelist
Feb  3 11:53:21.558: INFO: namespace e2e-tests-projected-68wxd deletion completed in 24.224730893s

• [SLOW TEST:109.558 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
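
The projected-secret test above mounts several optional secret sources, then deletes one secret, updates another and creates a third, waiting for the kubelet to refresh the mounted files. A sketch of the projected volume such a pod might declare, with the Optional flag doing the work; the secret name prefixes follow the log's STEP lines, everything else is illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        // Each projected source is marked Optional, so the volume tolerates a
        // referenced secret being deleted or created after the pod starts; the
        // kubelet rewrites the mounted files when the secrets change.
        vol := corev1.Volume{
            Name: "projected-secrets",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
                            Optional:             &optional,
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
                            Optional:             &optional,
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
                            Optional:             &optional,
                        }},
                    },
                },
            },
        }
        fmt.Println(len(vol.VolumeSource.Projected.Sources), "projected secret sources")
    }
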
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:53:21.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c6c186d8-467b-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 11:53:22.066: INFO: Waiting up to 5m0s for pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-vx6bs" to be "success or failure"
Feb  3 11:53:22.106: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.341366ms
Feb  3 11:53:24.608: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540866027s
Feb  3 11:53:26.662: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594964789s
Feb  3 11:53:28.969: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.901950887s
Feb  3 11:53:30.996: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929563097s
Feb  3 11:53:33.067: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.000826673s
STEP: Saw pod success
Feb  3 11:53:33.068: INFO: Pod "pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:53:33.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 11:53:33.156: INFO: Waiting for pod pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005 to disappear
Feb  3 11:53:33.204: INFO: Pod pod-secrets-c6e0cccf-467b-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:53:33.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vx6bs" for this suite.
Feb  3 11:53:39.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:53:39.450: INFO: namespace: e2e-tests-secrets-vx6bs, resource: bindings, ignored listing per whitelist
Feb  3 11:53:39.495: INFO: namespace e2e-tests-secrets-vx6bs deletion completed in 6.280190107s
STEP: Destroying namespace "e2e-tests-secret-namespace-pfs6z" for this suite.
Feb  3 11:53:45.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:53:45.640: INFO: namespace: e2e-tests-secret-namespace-pfs6z, resource: bindings, ignored listing per whitelist
Feb  3 11:53:45.689: INFO: namespace e2e-tests-secret-namespace-pfs6z deletion completed in 6.194762368s

• [SLOW TEST:24.131 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
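
The secrets test above checks namespace scoping: a secret with the same name in a second namespace (note the two namespace teardowns in the log) must not affect what gets mounted, because a SecretVolumeSource is resolved only in the pod's own namespace. A hypothetical sketch of the two objects and the volume reference; the data values are invented placeholders, while the namespaces are taken from the log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Two secrets share a name but live in different namespaces; only the
        // one in the pod's namespace can be mounted by name.
        secretA := corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "e2e-tests-secrets-vx6bs"},
            Data:       map[string][]byte{"data-1": []byte("value-1")}, // placeholder data
        }
        secretB := corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "e2e-tests-secret-namespace-pfs6z"},
            Data:       map[string][]byte{"data-1": []byte("other")}, // placeholder data
        }
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secretA.Name},
            },
        }
        fmt.Println(secretA.Namespace, secretB.Namespace, vol.Name)
    }
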
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:53:45.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wqcmm
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  3 11:53:45.930: INFO: Found 0 stateful pods, waiting for 3
Feb  3 11:53:55.946: INFO: Found 1 stateful pods, waiting for 3
Feb  3 11:54:05.943: INFO: Found 2 stateful pods, waiting for 3
Feb  3 11:54:15.952: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:54:15.952: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:54:15.952: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 11:54:25.951: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:54:25.952: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:54:25.952: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 11:54:26.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqcmm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:54:26.956: INFO: stderr: "I0203 11:54:26.234575    2888 log.go:172] (0xc000138160) (0xc00072a6e0) Create stream\nI0203 11:54:26.234986    2888 log.go:172] (0xc000138160) (0xc00072a6e0) Stream added, broadcasting: 1\nI0203 11:54:26.243640    2888 log.go:172] (0xc000138160) Reply frame received for 1\nI0203 11:54:26.243692    2888 log.go:172] (0xc000138160) (0xc00034ea00) Create stream\nI0203 11:54:26.243709    2888 log.go:172] (0xc000138160) (0xc00034ea00) Stream added, broadcasting: 3\nI0203 11:54:26.245233    2888 log.go:172] (0xc000138160) Reply frame received for 3\nI0203 11:54:26.245261    2888 log.go:172] (0xc000138160) (0xc0000f0c80) Create stream\nI0203 11:54:26.245276    2888 log.go:172] (0xc000138160) (0xc0000f0c80) Stream added, broadcasting: 5\nI0203 11:54:26.247009    2888 log.go:172] (0xc000138160) Reply frame received for 5\nI0203 11:54:26.550158    2888 log.go:172] (0xc000138160) Data frame received for 3\nI0203 11:54:26.550294    2888 log.go:172] (0xc00034ea00) (3) Data frame handling\nI0203 11:54:26.550347    2888 log.go:172] (0xc00034ea00) (3) Data frame sent\nI0203 11:54:26.942106    2888 log.go:172] (0xc000138160) Data frame received for 1\nI0203 11:54:26.942212    2888 log.go:172] (0xc000138160) (0xc00034ea00) Stream removed, broadcasting: 3\nI0203 11:54:26.942283    2888 log.go:172] (0xc00072a6e0) (1) Data frame handling\nI0203 11:54:26.942329    2888 log.go:172] (0xc00072a6e0) (1) Data frame sent\nI0203 11:54:26.942417    2888 log.go:172] (0xc000138160) (0xc0000f0c80) Stream removed, broadcasting: 5\nI0203 11:54:26.942535    2888 log.go:172] (0xc000138160) (0xc00072a6e0) Stream removed, broadcasting: 1\nI0203 11:54:26.942590    2888 log.go:172] (0xc000138160) Go away received\nI0203 11:54:26.943685    2888 log.go:172] (0xc000138160) (0xc00072a6e0) Stream removed, broadcasting: 1\nI0203 11:54:26.943695    2888 log.go:172] (0xc000138160) (0xc00034ea00) Stream removed, broadcasting: 3\nI0203 11:54:26.943701    2888 log.go:172] (0xc000138160) (0xc0000f0c80) Stream removed, broadcasting: 5\n"
Feb  3 11:54:26.957: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:54:26.957: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  3 11:54:37.038: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  3 11:54:47.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqcmm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:54:48.674: INFO: stderr: "I0203 11:54:47.479096    2911 log.go:172] (0xc0007c82c0) (0xc0005c9360) Create stream\nI0203 11:54:47.479363    2911 log.go:172] (0xc0007c82c0) (0xc0005c9360) Stream added, broadcasting: 1\nI0203 11:54:47.484442    2911 log.go:172] (0xc0007c82c0) Reply frame received for 1\nI0203 11:54:47.484474    2911 log.go:172] (0xc0007c82c0) (0xc0005c9400) Create stream\nI0203 11:54:47.484483    2911 log.go:172] (0xc0007c82c0) (0xc0005c9400) Stream added, broadcasting: 3\nI0203 11:54:47.485656    2911 log.go:172] (0xc0007c82c0) Reply frame received for 3\nI0203 11:54:47.485790    2911 log.go:172] (0xc0007c82c0) (0xc00036c000) Create stream\nI0203 11:54:47.485803    2911 log.go:172] (0xc0007c82c0) (0xc00036c000) Stream added, broadcasting: 5\nI0203 11:54:47.488859    2911 log.go:172] (0xc0007c82c0) Reply frame received for 5\nI0203 11:54:48.377292    2911 log.go:172] (0xc0007c82c0) Data frame received for 3\nI0203 11:54:48.377464    2911 log.go:172] (0xc0005c9400) (3) Data frame handling\nI0203 11:54:48.377501    2911 log.go:172] (0xc0005c9400) (3) Data frame sent\nI0203 11:54:48.652471    2911 log.go:172] (0xc0007c82c0) Data frame received for 1\nI0203 11:54:48.652606    2911 log.go:172] (0xc0005c9360) (1) Data frame handling\nI0203 11:54:48.652636    2911 log.go:172] (0xc0005c9360) (1) Data frame sent\nI0203 11:54:48.654474    2911 log.go:172] (0xc0007c82c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0203 11:54:48.654787    2911 log.go:172] (0xc0007c82c0) (0xc00036c000) Stream removed, broadcasting: 5\nI0203 11:54:48.655249    2911 log.go:172] (0xc0007c82c0) (0xc0005c9400) Stream removed, broadcasting: 3\nI0203 11:54:48.655671    2911 log.go:172] (0xc0007c82c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0203 11:54:48.655705    2911 log.go:172] (0xc0007c82c0) (0xc0005c9400) Stream removed, broadcasting: 3\nI0203 11:54:48.655722    2911 log.go:172] (0xc0007c82c0) (0xc00036c000) Stream removed, broadcasting: 5\nI0203 11:54:48.656080    2911 log.go:172] (0xc0007c82c0) Go away received\n"
Feb  3 11:54:48.674: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 11:54:48.674: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 11:54:58.789: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:54:58.789: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:54:58.789: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:54:58.789: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:09.160: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:55:09.160: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:09.160: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:18.824: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:55:18.824: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:18.824: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:28.815: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:55:28.815: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:38.830: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:55:38.830: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 11:55:48.878: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  3 11:55:58.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqcmm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 11:55:59.516: INFO: stderr: "I0203 11:55:59.119409    2933 log.go:172] (0xc00015a790) (0xc000679540) Create stream\nI0203 11:55:59.119626    2933 log.go:172] (0xc00015a790) (0xc000679540) Stream added, broadcasting: 1\nI0203 11:55:59.126255    2933 log.go:172] (0xc00015a790) Reply frame received for 1\nI0203 11:55:59.126315    2933 log.go:172] (0xc00015a790) (0xc0003bc000) Create stream\nI0203 11:55:59.126325    2933 log.go:172] (0xc00015a790) (0xc0003bc000) Stream added, broadcasting: 3\nI0203 11:55:59.129039    2933 log.go:172] (0xc00015a790) Reply frame received for 3\nI0203 11:55:59.129101    2933 log.go:172] (0xc00015a790) (0xc000676000) Create stream\nI0203 11:55:59.129116    2933 log.go:172] (0xc00015a790) (0xc000676000) Stream added, broadcasting: 5\nI0203 11:55:59.130334    2933 log.go:172] (0xc00015a790) Reply frame received for 5\nI0203 11:55:59.371497    2933 log.go:172] (0xc00015a790) Data frame received for 3\nI0203 11:55:59.371932    2933 log.go:172] (0xc0003bc000) (3) Data frame handling\nI0203 11:55:59.372045    2933 log.go:172] (0xc0003bc000) (3) Data frame sent\nI0203 11:55:59.501255    2933 log.go:172] (0xc00015a790) (0xc000676000) Stream removed, broadcasting: 5\nI0203 11:55:59.502084    2933 log.go:172] (0xc00015a790) Data frame received for 1\nI0203 11:55:59.502226    2933 log.go:172] (0xc00015a790) (0xc0003bc000) Stream removed, broadcasting: 3\nI0203 11:55:59.502319    2933 log.go:172] (0xc000679540) (1) Data frame handling\nI0203 11:55:59.502368    2933 log.go:172] (0xc000679540) (1) Data frame sent\nI0203 11:55:59.502387    2933 log.go:172] (0xc00015a790) (0xc000679540) Stream removed, broadcasting: 1\nI0203 11:55:59.502405    2933 log.go:172] (0xc00015a790) Go away received\nI0203 11:55:59.503451    2933 log.go:172] (0xc00015a790) (0xc000679540) Stream removed, broadcasting: 1\nI0203 11:55:59.503507    2933 log.go:172] (0xc00015a790) (0xc0003bc000) Stream removed, broadcasting: 3\nI0203 11:55:59.503550    2933 log.go:172] (0xc00015a790) (0xc000676000) Stream removed, broadcasting: 5\n"
Feb  3 11:55:59.516: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 11:55:59.516: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 11:56:09.592: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  3 11:56:19.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqcmm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 11:56:20.256: INFO: stderr: "I0203 11:56:19.990815    2956 log.go:172] (0xc000522210) (0xc0006065a0) Create stream\nI0203 11:56:19.991099    2956 log.go:172] (0xc000522210) (0xc0006065a0) Stream added, broadcasting: 1\nI0203 11:56:19.996707    2956 log.go:172] (0xc000522210) Reply frame received for 1\nI0203 11:56:19.996816    2956 log.go:172] (0xc000522210) (0xc0006ea000) Create stream\nI0203 11:56:19.996829    2956 log.go:172] (0xc000522210) (0xc0006ea000) Stream added, broadcasting: 3\nI0203 11:56:19.998379    2956 log.go:172] (0xc000522210) Reply frame received for 3\nI0203 11:56:19.998444    2956 log.go:172] (0xc000522210) (0xc00069ac80) Create stream\nI0203 11:56:19.998473    2956 log.go:172] (0xc000522210) (0xc00069ac80) Stream added, broadcasting: 5\nI0203 11:56:19.999937    2956 log.go:172] (0xc000522210) Reply frame received for 5\nI0203 11:56:20.104811    2956 log.go:172] (0xc000522210) Data frame received for 3\nI0203 11:56:20.104908    2956 log.go:172] (0xc0006ea000) (3) Data frame handling\nI0203 11:56:20.104950    2956 log.go:172] (0xc0006ea000) (3) Data frame sent\nI0203 11:56:20.243627    2956 log.go:172] (0xc000522210) Data frame received for 1\nI0203 11:56:20.243886    2956 log.go:172] (0xc000522210) (0xc00069ac80) Stream removed, broadcasting: 5\nI0203 11:56:20.244001    2956 log.go:172] (0xc0006065a0) (1) Data frame handling\nI0203 11:56:20.244067    2956 log.go:172] (0xc000522210) (0xc0006ea000) Stream removed, broadcasting: 3\nI0203 11:56:20.244156    2956 log.go:172] (0xc0006065a0) (1) Data frame sent\nI0203 11:56:20.244175    2956 log.go:172] (0xc000522210) (0xc0006065a0) Stream removed, broadcasting: 1\nI0203 11:56:20.244239    2956 log.go:172] (0xc000522210) Go away received\nI0203 11:56:20.245009    2956 log.go:172] (0xc000522210) (0xc0006065a0) Stream removed, broadcasting: 1\nI0203 11:56:20.245073    2956 log.go:172] (0xc000522210) (0xc0006ea000) Stream removed, broadcasting: 3\nI0203 11:56:20.245095    2956 log.go:172] (0xc000522210) (0xc00069ac80) Stream removed, broadcasting: 5\n"
Feb  3 11:56:20.256: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 11:56:20.256: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 11:56:30.331: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:56:30.331: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:30.331: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:30.331: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:41.059: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:56:41.059: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:41.059: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:50.354: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:56:50.354: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:56:50.354: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:57:00.389: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:57:00.389: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:57:10.363: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
Feb  3 11:57:10.363: INFO: Waiting for Pod e2e-tests-statefulset-wqcmm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 11:57:20.385: INFO: Waiting for StatefulSet e2e-tests-statefulset-wqcmm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  3 11:57:30.359: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wqcmm
Feb  3 11:57:30.373: INFO: Scaling statefulset ss2 to 0
Feb  3 11:57:50.427: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 11:57:50.436: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:57:50.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wqcmm" for this suite.
Feb  3 11:57:58.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:57:58.921: INFO: namespace: e2e-tests-statefulset-wqcmm, resource: bindings, ignored listing per whitelist
Feb  3 11:57:58.938: INFO: namespace e2e-tests-statefulset-wqcmm deletion completed in 8.391914031s

• [SLOW TEST:253.248 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
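
The StatefulSet test above drives a RollingUpdate of the pod template from nginx:1.14-alpine to nginx:1.15-alpine and then rolls it back, which is why the log alternates between the ss2-7c9b54fd4c and ss2-6c5cd755cd revisions and walks pods in reverse ordinal order. A rough sketch of the object involved, using the k8s.io/api/apps/v1 types; the labels, service name and replica count are assumptions, not the suite's exact values.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(3)
        labels := map[string]string{"app": "ss2"} // placeholder labels
        ss := &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:    &replicas,
                ServiceName: "test", // headless service, created separately
                Selector:    &metav1.LabelSelector{MatchLabels: labels},
                // RollingUpdate replaces pods one at a time in reverse ordinal
                // order (ss2-2, ss2-1, ss2-0); each template change creates a new
                // controller revision, and reverting the template rolls back to
                // the previous revision.
                UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                    Type: appsv1.RollingUpdateStatefulSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        // Changing the template image (and later changing it back) is roughly what
        // happens between the two "Updating stateful set ss2" steps in the log.
        ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
        fmt.Println(ss.Spec.Template.Spec.Containers[0].Image)
    }
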
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:57:58.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005
Feb  3 11:57:59.172: INFO: Pod name my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005: Found 0 pods out of 1
Feb  3 11:58:04.220: INFO: Pod name my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005: Found 1 pods out of 1
Feb  3 11:58:04.220: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005" are running
Feb  3 11:58:10.245: INFO: Pod "my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005-jjcj4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:57:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:57:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:57:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 11:57:59 +0000 UTC Reason: Message:}])
Feb  3 11:58:10.245: INFO: Trying to dial the pod
Feb  3 11:58:15.287: INFO: Controller my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005: Got expected result from replica 1 [my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005-jjcj4]: "my-hostname-basic-6c15d632-467c-11ea-ab15-0242ac110005-jjcj4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:58:15.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-sbnnj" for this suite.
Feb  3 11:58:21.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:58:21.846: INFO: namespace: e2e-tests-replication-controller-sbnnj, resource: bindings, ignored listing per whitelist
Feb  3 11:58:21.876: INFO: namespace e2e-tests-replication-controller-sbnnj deletion completed in 6.573082184s

• [SLOW TEST:22.937 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
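
The ReplicationController test above creates one replica of a public image that reports its own hostname and then dials it, expecting the reply to match the pod name. An illustrative spec under that assumption; the image reference and port are guesses and are labelled as such in the comments.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        name := "my-hostname-basic-example" // placeholder; the test appends a UID
        labels := map[string]string{"name": name}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name: name,
                            // Placeholder for a public image that serves its own
                            // hostname over HTTP, so each replica can be dialed
                            // and identified by name.
                            Image: "serve-hostname:latest",                       // assumed image
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // assumed port
                        }},
                    },
                },
            },
        }
        fmt.Println(rc.Name, *rc.Spec.Replicas)
    }
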
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:58:21.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-79c7dc4f-467c-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-79c7dc4f-467c-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:58:36.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w9mjj" for this suite.
Feb  3 11:59:00.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:59:00.959: INFO: namespace: e2e-tests-projected-w9mjj, resource: bindings, ignored listing per whitelist
Feb  3 11:59:00.994: INFO: namespace e2e-tests-projected-w9mjj deletion completed in 24.377334348s

• [SLOW TEST:39.117 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
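
The projected-configMap test above updates the configMap while the pod is running and then polls until the mounted file reflects the new data, which is why most of its runtime is the "waiting to observe update in volume" step. A sketch of the projected configMap source such a pod might declare; the key name and configMap name are placeholders.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // The pod mounts the configMap through a projected volume; after the
        // configMap's data is updated, the kubelet rewrites the mounted file.
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-configmap-test-upd", // placeholder; the test appends a UID
                            },
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}}, // placeholder key
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.VolumeSource.Projected.Sources[0].ConfigMap.Name)
    }
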
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:59:00.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 11:59:01.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-cwhkz" to be "success or failure"
Feb  3 11:59:01.364: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.653697ms
Feb  3 11:59:03.379: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086730854s
Feb  3 11:59:05.403: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110851701s
Feb  3 11:59:07.418: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125394211s
Feb  3 11:59:09.486: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1938076s
Feb  3 11:59:11.954: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.661577979s
STEP: Saw pod success
Feb  3 11:59:11.954: INFO: Pod "downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 11:59:11.980: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 11:59:12.302: INFO: Waiting for pod downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005 to disappear
Feb  3 11:59:12.322: INFO: Pod downwardapi-volume-911d9010-467c-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:59:12.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cwhkz" for this suite.
Feb  3 11:59:18.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 11:59:18.662: INFO: namespace: e2e-tests-downward-api-cwhkz, resource: bindings, ignored listing per whitelist
Feb  3 11:59:18.735: INFO: namespace e2e-tests-downward-api-cwhkz deletion completed in 6.401290559s

• [SLOW TEST:17.741 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 11:59:18.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-6jzv
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 11:59:20.643: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6jzv" in namespace "e2e-tests-subpath-g9dj9" to be "success or failure"
Feb  3 11:59:20.666: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.507193ms
Feb  3 11:59:22.770: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126152691s
Feb  3 11:59:24.781: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137903167s
Feb  3 11:59:26.799: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155897406s
Feb  3 11:59:28.843: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199545223s
Feb  3 11:59:30.876: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.232771756s
Feb  3 11:59:32.902: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.258636399s
Feb  3 11:59:35.003: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.360040729s
Feb  3 11:59:37.021: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.377989976s
Feb  3 11:59:39.077: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 18.433895453s
Feb  3 11:59:41.108: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 20.464700135s
Feb  3 11:59:43.118: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 22.474844022s
Feb  3 11:59:45.154: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 24.510240918s
Feb  3 11:59:47.172: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 26.528765526s
Feb  3 11:59:49.201: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 28.55779972s
Feb  3 11:59:51.230: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 30.586786659s
Feb  3 11:59:53.272: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 32.628312101s
Feb  3 11:59:55.294: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Running", Reason="", readiness=false. Elapsed: 34.650328162s
Feb  3 11:59:57.310: INFO: Pod "pod-subpath-test-configmap-6jzv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.666991s
STEP: Saw pod success
Feb  3 11:59:57.311: INFO: Pod "pod-subpath-test-configmap-6jzv" satisfied condition "success or failure"
Feb  3 11:59:57.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-6jzv container test-container-subpath-configmap-6jzv: 
STEP: delete the pod
Feb  3 11:59:57.614: INFO: Waiting for pod pod-subpath-test-configmap-6jzv to disappear
Feb  3 11:59:57.640: INFO: Pod pod-subpath-test-configmap-6jzv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6jzv
Feb  3 11:59:57.640: INFO: Deleting pod "pod-subpath-test-configmap-6jzv" in namespace "e2e-tests-subpath-g9dj9"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 11:59:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-g9dj9" for this suite.
Feb  3 12:00:03.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:00:03.976: INFO: namespace: e2e-tests-subpath-g9dj9, resource: bindings, ignored listing per whitelist
Feb  3 12:00:04.036: INFO: namespace e2e-tests-subpath-g9dj9 deletion completed in 6.383153567s

• [SLOW TEST:45.300 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
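
The subpath test above mounts a single configMap key via subPath at a mount path where the image already has a regular file, and the atomic-writer machinery must still expose the expected content there. An illustrative container and volume pair under that assumption; the target file, key and configMap names are placeholders.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A single key of the configMap is mounted via SubPath directly over a
        // path where the image already contains a regular file.
        container := corev1.Container{
            Name:    "test-container-subpath-configmap",
            Image:   "busybox",                           // placeholder
            Command: []string{"sh", "-c", "cat /etc/hostname"}, // placeholder target file
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "test-volume",
                MountPath: "/etc/hostname", // existing file inside the image
                SubPath:   "configmap-key", // placeholder key name
            }},
        }
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // placeholder
                },
            },
        }
        fmt.Println(container.VolumeMounts[0].SubPath, vol.Name)
    }
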
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:00:04.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  3 12:00:13.163: INFO: Successfully updated pod "labelsupdateb6aa6143-467c-11ea-ab15-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:00:17.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fndbg" for this suite.
Feb  3 12:00:41.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:00:41.688: INFO: namespace: e2e-tests-projected-fndbg, resource: bindings, ignored listing per whitelist
Feb  3 12:00:41.744: INFO: namespace e2e-tests-projected-fndbg deletion completed in 24.358680948s

• [SLOW TEST:37.708 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
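
The projected downward API test above exposes the pod's labels as a file and then patches the labels (the "Successfully updated pod" line), expecting the kubelet to rewrite the file with the new values. A minimal sketch of the projected downwardAPI source involved; the volume and file names are placeholders.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // metadata.labels is exposed as a file; when the pod's labels change,
        // the kubelet refreshes the file and the container sees the update.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.VolumeSource.Projected.Sources[0].DownwardAPI.Items[0].Path)
    }
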
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:00:41.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:00:42.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-5rnhp" to be "success or failure"
Feb  3 12:00:42.071: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.527741ms
Feb  3 12:00:44.090: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059781403s
Feb  3 12:00:46.112: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082093698s
Feb  3 12:00:48.159: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129289511s
Feb  3 12:00:50.186: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.15611377s
Feb  3 12:00:52.760: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.730284961s
STEP: Saw pod success
Feb  3 12:00:52.760: INFO: Pod "downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:00:52.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:00:53.221: INFO: Waiting for pod downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005 to disappear
Feb  3 12:00:53.263: INFO: Pod downwardapi-volume-cd29543b-467c-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:00:53.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5rnhp" for this suite.
Feb  3 12:00:59.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:00:59.422: INFO: namespace: e2e-tests-downward-api-5rnhp, resource: bindings, ignored listing per whitelist
Feb  3 12:00:59.486: INFO: namespace e2e-tests-downward-api-5rnhp deletion completed in 6.209268247s

• [SLOW TEST:17.741 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
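
For reference: the Downward API volume spec above checks that a container can read its own CPU limit from a downwardAPI file built with resourceFieldRef. A minimal pod of the same shape (names, limit, and divisor chosen for illustration):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # with a 500m limit the file contains "500"
EOF
kubectl logs downwardapi-cpu-limit-demo   # read the output once the pod reports Succeeded
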
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:00:59.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0203 12:01:40.697986       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 12:01:40.698: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:01:40.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hgrt2" for this suite.
Feb  3 12:01:52.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:01:52.615: INFO: namespace: e2e-tests-gc-hgrt2, resource: bindings, ignored listing per whitelist
Feb  3 12:01:53.256: INFO: namespace e2e-tests-gc-hgrt2 deletion completed in 12.505568962s

• [SLOW TEST:53.769 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
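
For reference: the garbage-collector spec above deletes a ReplicationController with the orphan deletion policy and then verifies the pods it created are left running. With a 1.13-era kubectl the same behaviour can be reproduced roughly like this (manifest, rc name, and label are placeholders):

kubectl create -f my-rc.yaml                 # any simple ReplicationController
kubectl delete rc my-rc --cascade=false      # orphan policy: the RC is removed, its pods are not
kubectl get pods -l app=my-rc                # pods still Running, now without an owner
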
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:01:53.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f7db50c3-467c-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:01:53.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-7snpg" to be "success or failure"
Feb  3 12:01:53.779: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.780439ms
Feb  3 12:01:56.918: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.182663878s
Feb  3 12:01:58.928: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.192751413s
Feb  3 12:02:00.943: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.207343905s
Feb  3 12:02:03.285: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.54892562s
Feb  3 12:02:05.301: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.565121256s
Feb  3 12:02:07.316: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.580642487s
Feb  3 12:02:09.376: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.639878682s
Feb  3 12:02:11.405: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.669303307s
Feb  3 12:02:13.444: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.708536323s
Feb  3 12:02:15.673: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.937749157s
STEP: Saw pod success
Feb  3 12:02:15.674: INFO: Pod "pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:02:15.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 12:02:15.868: INFO: Waiting for pod pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005 to disappear
Feb  3 12:02:15.980: INFO: Pod pod-projected-secrets-f7dcf3fd-467c-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:02:15.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7snpg" for this suite.
Feb  3 12:02:22.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:02:22.130: INFO: namespace: e2e-tests-projected-7snpg, resource: bindings, ignored listing per whitelist
Feb  3 12:02:22.227: INFO: namespace e2e-tests-projected-7snpg deletion completed in 6.227421598s

• [SLOW TEST:28.971 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
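
For reference: the projected-secret spec above checks that defaultMode is applied to the file projected from the secret. A hand-written approximation (secret name, key, and mode are illustrative):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400        # applied to every file projected into this volume
      sources:
      - secret:
          name: projected-secret-demo
EOF
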
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:02:22.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  3 12:02:22.432: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:02:42.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-t9wbz" for this suite.
Feb  3 12:02:48.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:02:48.932: INFO: namespace: e2e-tests-init-container-t9wbz, resource: bindings, ignored listing per whitelist
Feb  3 12:02:49.277: INFO: namespace e2e-tests-init-container-t9wbz deletion completed in 6.740185315s

• [SLOW TEST:27.050 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
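
For reference: the InitContainer spec above builds a restartPolicy Never pod whose init container exits non-zero, then asserts the app container never starts and the pod ends up Failed. A minimal pod with the same structure (names are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]      # fails, and with restartPolicy Never it is not retried
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo should never run"]
EOF
kubectl get pod init-fail-demo           # STATUS settles at Init:Error; run1 never starts
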
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:02:49.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-82q2l
Feb  3 12:03:01.757: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-82q2l
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 12:03:01.763: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:07:02.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-82q2l" for this suite.
Feb  3 12:07:10.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:07:10.907: INFO: namespace: e2e-tests-container-probe-82q2l, resource: bindings, ignored listing per whitelist
Feb  3 12:07:11.004: INFO: namespace e2e-tests-container-probe-82q2l deletion completed in 8.296363093s

• [SLOW TEST:261.722 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
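
For reference: the liveness-exec pod used by the probe spec above keeps /tmp/health in place, so the exec probe keeps succeeding and restartCount stays at 0 for the whole observation window. A minimal pod of the same shape (names and timings are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    args: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
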
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:07:11.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  3 12:07:11.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-798bg run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  3 12:07:24.144: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0203 12:07:22.503613    2979 log.go:172] (0xc000734210) (0xc000896000) Create stream\nI0203 12:07:22.504006    2979 log.go:172] (0xc000734210) (0xc000896000) Stream added, broadcasting: 1\nI0203 12:07:22.523052    2979 log.go:172] (0xc000734210) Reply frame received for 1\nI0203 12:07:22.523416    2979 log.go:172] (0xc000734210) (0xc0008960a0) Create stream\nI0203 12:07:22.523483    2979 log.go:172] (0xc000734210) (0xc0008960a0) Stream added, broadcasting: 3\nI0203 12:07:22.528951    2979 log.go:172] (0xc000734210) Reply frame received for 3\nI0203 12:07:22.529250    2979 log.go:172] (0xc000734210) (0xc0005d14a0) Create stream\nI0203 12:07:22.529306    2979 log.go:172] (0xc000734210) (0xc0005d14a0) Stream added, broadcasting: 5\nI0203 12:07:22.533462    2979 log.go:172] (0xc000734210) Reply frame received for 5\nI0203 12:07:22.533617    2979 log.go:172] (0xc000734210) (0xc0005d1540) Create stream\nI0203 12:07:22.533655    2979 log.go:172] (0xc000734210) (0xc0005d1540) Stream added, broadcasting: 7\nI0203 12:07:22.580110    2979 log.go:172] (0xc000734210) Reply frame received for 7\nI0203 12:07:22.581631    2979 log.go:172] (0xc0008960a0) (3) Writing data frame\nI0203 12:07:22.583348    2979 log.go:172] (0xc0008960a0) (3) Writing data frame\nI0203 12:07:22.684911    2979 log.go:172] (0xc000734210) Data frame received for 5\nI0203 12:07:22.685219    2979 log.go:172] (0xc0005d14a0) (5) Data frame handling\nI0203 12:07:22.685285    2979 log.go:172] (0xc0005d14a0) (5) Data frame sent\nI0203 12:07:22.685302    2979 log.go:172] (0xc000734210) Data frame received for 5\nI0203 12:07:22.685321    2979 log.go:172] (0xc0005d14a0) (5) Data frame handling\nI0203 12:07:22.685540    2979 log.go:172] (0xc0005d14a0) (5) Data frame sent\nI0203 12:07:23.982791    2979 log.go:172] (0xc000734210) (0xc0008960a0) Stream removed, broadcasting: 3\nI0203 12:07:23.983159    2979 log.go:172] (0xc000734210) Data frame received for 1\nI0203 12:07:23.983213    2979 log.go:172] (0xc000896000) (1) Data frame handling\nI0203 12:07:23.983265    2979 log.go:172] (0xc000896000) (1) Data frame sent\nI0203 12:07:23.983295    2979 log.go:172] (0xc000734210) (0xc000896000) Stream removed, broadcasting: 1\nI0203 12:07:23.983570    2979 log.go:172] (0xc000734210) (0xc0005d14a0) Stream removed, broadcasting: 5\nI0203 12:07:23.983733    2979 log.go:172] (0xc000734210) (0xc0005d1540) Stream removed, broadcasting: 7\nI0203 12:07:23.983870    2979 log.go:172] (0xc000734210) Go away received\nI0203 12:07:23.984223    2979 log.go:172] (0xc000734210) (0xc000896000) Stream removed, broadcasting: 1\nI0203 12:07:23.984259    2979 log.go:172] (0xc000734210) (0xc0008960a0) Stream removed, broadcasting: 3\nI0203 12:07:23.984274    2979 log.go:172] (0xc000734210) (0xc0005d14a0) Stream removed, broadcasting: 5\nI0203 12:07:23.984283    2979 log.go:172] (0xc000734210) (0xc0005d1540) Stream removed, broadcasting: 7\n"
Feb  3 12:07:24.145: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:07:26.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-798bg" for this suite.
Feb  3 12:07:34.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:07:35.153: INFO: namespace: e2e-tests-kubectl-798bg, resource: bindings, ignored listing per whitelist
Feb  3 12:07:35.221: INFO: namespace e2e-tests-kubectl-798bg deletion completed in 8.380358261s

• [SLOW TEST:24.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
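
For reference: the kubectl invocation quoted above can be reproduced by hand; --rm deletes the job once the attached session ends, which is what the final verification step checks. Piping the input mimics the "abcd1234" the test writes over the attach stream, and the job/v1 generator matches the 1.13-era client in this log (it is removed in newer releases):

echo 'abcd1234' | kubectl run e2e-demo-rm-job --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
    -- sh -c 'cat && echo "stdin closed"'
kubectl get job e2e-demo-rm-job    # expected: Error from server (NotFound) after --rm cleanup
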
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:07:35.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-f7dtt/configmap-test-c38f6adc-467d-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:07:35.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-f7dtt" to be "success or failure"
Feb  3 12:07:35.635: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 179.186081ms
Feb  3 12:07:37.733: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276721984s
Feb  3 12:07:39.756: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299962074s
Feb  3 12:07:42.086: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630167879s
Feb  3 12:07:44.102: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.645707853s
Feb  3 12:07:46.119: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663125131s
STEP: Saw pod success
Feb  3 12:07:46.119: INFO: Pod "pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:07:46.124: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005 container env-test: 
STEP: delete the pod
Feb  3 12:07:46.472: INFO: Waiting for pod pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005 to disappear
Feb  3 12:07:46.568: INFO: Pod pod-configmaps-c3922df5-467d-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:07:46.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f7dtt" for this suite.
Feb  3 12:07:53.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:07:53.509: INFO: namespace: e2e-tests-configmap-f7dtt, resource: bindings, ignored listing per whitelist
Feb  3 12:07:53.607: INFO: namespace e2e-tests-configmap-f7dtt deletion completed in 7.013608928s

• [SLOW TEST:18.386 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
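
For reference: the ConfigMap spec above injects a single key into the container's environment and checks the output. A hand-written equivalent (configmap name, key, and variable name are placeholders):

kubectl create configmap env-demo-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo-cm
          key: data-1
EOF
kubectl logs pod-configmaps-env-demo     # prints CONFIG_DATA_1=value-1 once the pod has Succeeded
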
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:07:53.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:08:04.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-82ffg" for this suite.
Feb  3 12:08:58.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:08:58.233: INFO: namespace: e2e-tests-kubelet-test-82ffg, resource: bindings, ignored listing per whitelist
Feb  3 12:08:58.273: INFO: namespace e2e-tests-kubelet-test-82ffg deletion completed in 54.222637204s

• [SLOW TEST:64.665 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
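
For reference: the Kubelet spec above runs a busybox container with a read-only root filesystem and expects writes to the root filesystem to fail. Approximately (names and message are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /rootfile || echo 'write to root filesystem refused'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-root-demo          # the redirect fails on the read-only filesystem and the fallback message prints
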
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:08:58.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 12:08:58.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-gqbj9'
Feb  3 12:08:58.851: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 12:08:58.851: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  3 12:08:58.875: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-q52dh]
Feb  3 12:08:58.875: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-q52dh" in namespace "e2e-tests-kubectl-gqbj9" to be "running and ready"
Feb  3 12:08:58.882: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.088138ms
Feb  3 12:09:00.903: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027935628s
Feb  3 12:09:02.922: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046661621s
Feb  3 12:09:05.086: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210769049s
Feb  3 12:09:07.103: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22766012s
Feb  3 12:09:09.120: INFO: Pod "e2e-test-nginx-rc-q52dh": Phase="Running", Reason="", readiness=true. Elapsed: 10.245329427s
Feb  3 12:09:09.121: INFO: Pod "e2e-test-nginx-rc-q52dh" satisfied condition "running and ready"
Feb  3 12:09:09.121: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-q52dh]
Feb  3 12:09:09.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gqbj9'
Feb  3 12:09:09.354: INFO: stderr: ""
Feb  3 12:09:09.355: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  3 12:09:09.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gqbj9'
Feb  3 12:09:09.596: INFO: stderr: ""
Feb  3 12:09:09.596: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:09:09.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gqbj9" for this suite.
Feb  3 12:09:15.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:09:15.772: INFO: namespace: e2e-tests-kubectl-gqbj9, resource: bindings, ignored listing per whitelist
Feb  3 12:09:15.920: INFO: namespace e2e-tests-kubectl-gqbj9 deletion completed in 6.318394478s

• [SLOW TEST:17.647 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:09:15.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  3 12:09:26.895: INFO: Successfully updated pod "annotationupdateff9a515d-467d-11ea-ab15-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:09:28.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-brbd9" for this suite.
Feb  3 12:09:45.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:09:45.121: INFO: namespace: e2e-tests-projected-brbd9, resource: bindings, ignored listing per whitelist
Feb  3 12:09:45.336: INFO: namespace e2e-tests-projected-brbd9 deletion completed in 16.352927101s

• [SLOW TEST:29.416 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:09:45.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  3 12:09:45.631: INFO: Waiting up to 5m0s for pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-bjq2k" to be "success or failure"
Feb  3 12:09:45.651: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.569122ms
Feb  3 12:09:48.083: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452117423s
Feb  3 12:09:50.110: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478557264s
Feb  3 12:09:52.144: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512445056s
Feb  3 12:09:54.169: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537449617s
Feb  3 12:09:56.252: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.620638101s
STEP: Saw pod success
Feb  3 12:09:56.252: INFO: Pod "downward-api-112c9573-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:09:56.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-112c9573-467e-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 12:09:56.483: INFO: Waiting for pod downward-api-112c9573-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:09:56.502: INFO: Pod downward-api-112c9573-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:09:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bjq2k" for this suite.
Feb  3 12:10:02.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:10:02.709: INFO: namespace: e2e-tests-downward-api-bjq2k, resource: bindings, ignored listing per whitelist
Feb  3 12:10:02.762: INFO: namespace e2e-tests-downward-api-bjq2k deletion completed in 6.239451457s

• [SLOW TEST:17.425 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
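
For reference: the Downward API spec above exposes the node's IP to the container through a fieldRef on status.hostIP. Minimal equivalent (names are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs downward-api-hostip-demo    # prints the IP of the node the pod was scheduled to
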
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:10:02.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 12:10:02.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-2vdbk'
Feb  3 12:10:03.093: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 12:10:03.093: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb  3 12:10:07.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2vdbk'
Feb  3 12:10:09.096: INFO: stderr: ""
Feb  3 12:10:09.096: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:10:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2vdbk" for this suite.
Feb  3 12:10:15.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:10:15.303: INFO: namespace: e2e-tests-kubectl-2vdbk, resource: bindings, ignored listing per whitelist
Feb  3 12:10:15.399: INFO: namespace e2e-tests-kubectl-2vdbk deletion completed in 6.224056484s

• [SLOW TEST:12.637 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:10:15.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0203 12:10:16.901988       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 12:10:16.902: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:10:16.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hsjv4" for this suite.
Feb  3 12:10:25.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:10:25.882: INFO: namespace: e2e-tests-gc-hsjv4, resource: bindings, ignored listing per whitelist
Feb  3 12:10:25.945: INFO: namespace e2e-tests-gc-hsjv4 deletion completed in 9.034374765s

• [SLOW TEST:10.545 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:10:25.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb  3 12:10:26.209: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:10:26.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-smnr6" for this suite.
Feb  3 12:10:32.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:10:33.001: INFO: namespace: e2e-tests-kubectl-smnr6, resource: bindings, ignored listing per whitelist
Feb  3 12:10:33.019: INFO: namespace e2e-tests-kubectl-smnr6 deletion completed in 6.560640391s

• [SLOW TEST:7.073 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
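
For reference: with --port=0 (the -p 0 seen in the proxy spec above) kubectl proxy binds an ephemeral port and prints it on startup, and the test then curls /api/ through it. By hand it looks roughly like this (the port placeholder must be replaced with the one the proxy prints):

kubectl proxy --port=0 --disable-filter=true &
# stdout: "Starting to serve on 127.0.0.1:<PORT>"
curl -s http://127.0.0.1:<PORT>/api/     # returns the APIVersions object served by the apiserver
kill $!
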
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:10:33.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  3 12:10:33.235: INFO: Waiting up to 5m0s for pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-gbw8s" to be "success or failure"
Feb  3 12:10:33.249: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.213707ms
Feb  3 12:10:35.412: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176299173s
Feb  3 12:10:37.425: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189710233s
Feb  3 12:10:39.493: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257945984s
Feb  3 12:10:41.538: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302265525s
Feb  3 12:10:43.550: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.31427277s
STEP: Saw pod success
Feb  3 12:10:43.550: INFO: Pod "downward-api-2d8b5001-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:10:43.555: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2d8b5001-467e-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 12:10:44.153: INFO: Waiting for pod downward-api-2d8b5001-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:10:44.176: INFO: Pod downward-api-2d8b5001-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:10:44.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gbw8s" for this suite.
Feb  3 12:10:52.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:10:52.749: INFO: namespace: e2e-tests-downward-api-gbw8s, resource: bindings, ignored listing per whitelist
Feb  3 12:10:52.810: INFO: namespace e2e-tests-downward-api-gbw8s deletion completed in 8.624588734s

• [SLOW TEST:19.790 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:10:52.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-396439cd-467e-11ea-ab15-0242ac110005
STEP: Creating secret with name s-test-opt-upd-39643aa5-467e-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-396439cd-467e-11ea-ab15-0242ac110005
STEP: Updating secret s-test-opt-upd-39643aa5-467e-11ea-ab15-0242ac110005
STEP: Creating secret with name s-test-opt-create-39643ad8-467e-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:11:09.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hd6h5" for this suite.
Feb  3 12:11:33.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:11:34.193: INFO: namespace: e2e-tests-secrets-hd6h5, resource: bindings, ignored listing per whitelist
Feb  3 12:11:34.194: INFO: namespace e2e-tests-secrets-hd6h5 deletion completed in 24.467831996s

• [SLOW TEST:41.383 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
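
For reference: the Secrets spec above mounts volumes whose secrets are marked optional, then deletes one secret, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files. The key detail is optional: true on the volume source; for example (names are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: creates-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: maybe-secret
    secret:
      secretName: s-test-opt-create-demo
      optional: true           # the pod starts even though the secret does not exist yet
EOF
kubectl create secret generic s-test-opt-create-demo --from-literal=data-1=value-1
# within the kubelet sync period the projected file appears:
kubectl exec optional-secret-demo -- cat /etc/secret-volume/data-1
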
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:11:34.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  3 12:11:34.400: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix128918340/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:11:34.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ksngc" for this suite.
Feb  3 12:11:40.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:11:40.818: INFO: namespace: e2e-tests-kubectl-ksngc, resource: bindings, ignored listing per whitelist
Feb  3 12:11:40.832: INFO: namespace e2e-tests-kubectl-ksngc deletion completed in 6.284213621s

• [SLOW TEST:6.638 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:11:40.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-55f2fbd3-467e-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:11:41.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-rxfrm" to be "success or failure"
Feb  3 12:11:41.039: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.021852ms
Feb  3 12:11:43.114: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090693994s
Feb  3 12:11:45.140: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117049924s
Feb  3 12:11:47.637: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613338022s
Feb  3 12:11:49.647: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.624108363s
Feb  3 12:11:51.665: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.641526599s
STEP: Saw pod success
Feb  3 12:11:51.665: INFO: Pod "pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:11:51.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 12:11:52.658: INFO: Waiting for pod pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:11:52.889: INFO: Pod pod-projected-secrets-55f44d9d-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:11:52.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rxfrm" for this suite.
Feb  3 12:12:00.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:12:01.311: INFO: namespace: e2e-tests-projected-rxfrm, resource: bindings, ignored listing per whitelist
Feb  3 12:12:01.324: INFO: namespace e2e-tests-projected-rxfrm deletion completed in 8.415634727s

• [SLOW TEST:20.492 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
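
For reference, the pod that the projected-secret test above drives looks roughly like the sketch below: a projected volume that maps one secret key to a new path, mounted read-only, with a container that simply cats the file. This is an illustrative Go reconstruction, not the test's source; the secret name, key, paths, mode and busybox image are assumptions. Printing the object as JSON yields a manifest that kubectl apply -f accepts.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Map the secret key to a different file name inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
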
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:12:01.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  3 12:12:19.877: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:19.885: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:21.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:21.898: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:23.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:24.024: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:25.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:25.913: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:27.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:27.907: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:29.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:29.940: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:31.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:31.895: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 12:12:33.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 12:12:33.910: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:12:33.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dh84z" for this suite.
Feb  3 12:12:58.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:12:58.188: INFO: namespace: e2e-tests-container-lifecycle-hook-dh84z, resource: bindings, ignored listing per whitelist
Feb  3 12:12:58.269: INFO: namespace e2e-tests-container-lifecycle-hook-dh84z deletion completed in 24.286122913s

• [SLOW TEST:56.944 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
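
The prestop-hook test deletes a pod whose container carries a lifecycle preStop HTTP hook and then checks that the handler pod received the request. A minimal sketch of such a container follows, written against the API vintage matching this log (k8s.io/api v1.13, where the hook type is still corev1.Handler; newer releases renamed it LifecycleHandler). The path, port and handler IP are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "docker.io/library/nginx:1.14-alpine",
		Lifecycle: &corev1.Lifecycle{
			// On pod deletion the kubelet calls this endpoint before stopping the container.
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop", // illustrative path on the handler pod
					Host: "10.32.0.4",         // IP of the pod that records the hook call (assumption)
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
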
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:12:58.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:12:58.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-vv7g4" to be "success or failure"
Feb  3 12:12:58.719: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.170991ms
Feb  3 12:13:00.997: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292562941s
Feb  3 12:13:03.013: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308807365s
Feb  3 12:13:06.494: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.789759189s
Feb  3 12:13:08.532: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.828092939s
Feb  3 12:13:10.565: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.861230794s
STEP: Saw pod success
Feb  3 12:13:10.566: INFO: Pod "downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:13:10.582: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:13:11.187: INFO: Waiting for pod downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:13:11.611: INFO: Pod downwardapi-volume-843de924-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:13:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vv7g4" for this suite.
Feb  3 12:13:17.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:13:18.023: INFO: namespace: e2e-tests-downward-api-vv7g4, resource: bindings, ignored listing per whitelist
Feb  3 12:13:18.051: INFO: namespace e2e-tests-downward-api-vv7g4 deletion completed in 6.403411684s

• [SLOW TEST:19.781 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
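
The downward-API test above mounts the container's own memory request as a file in the pod. A minimal Go sketch of an equivalent pod, assuming a busybox image and a 32Mi request; a divisor of "1" reports the value in bytes.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// Expose the container's memory request, scaled to bytes.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
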
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:13:18.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  3 12:13:18.217: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pzcgl,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzcgl/configmaps/e2e-watch-test-resource-version,UID:8fe128a5-467e-11ea-a994-fa163e34d433,ResourceVersion:20415325,Generation:0,CreationTimestamp:2020-02-03 12:13:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 12:13:18.217: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pzcgl,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzcgl/configmaps/e2e-watch-test-resource-version,UID:8fe128a5-467e-11ea-a994-fa163e34d433,ResourceVersion:20415326,Generation:0,CreationTimestamp:2020-02-03 12:13:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:13:18.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pzcgl" for this suite.
Feb  3 12:13:24.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:13:24.396: INFO: namespace: e2e-tests-watch-pzcgl, resource: bindings, ignored listing per whitelist
Feb  3 12:13:24.488: INFO: namespace e2e-tests-watch-pzcgl deletion completed in 6.266711623s

• [SLOW TEST:6.437 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
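
The watch test creates a configmap, mutates it twice, deletes it, and then opens a watch anchored at the resource version returned by the first update, expecting to see only the later MODIFIED and DELETED events. A hedged client-go sketch of that pattern follows, written against a client-go vintage contemporary with this log (Watch takes no context argument in that era); the kubeconfig path, namespace, configmap name and resource version are illustrative.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Start the watch at the ResourceVersion returned by an earlier update,
	// so only the second modification and the deletion are delivered.
	w, err := cs.CoreV1().ConfigMaps("e2e-tests-watch-example").Watch(metav1.ListOptions{
		ResourceVersion: "20415324", // illustrative; use the version returned by the first update
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
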
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:13:24.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:13:24.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-m7bhg" to be "success or failure"
Feb  3 12:13:24.885: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.343032ms
Feb  3 12:13:27.030: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222855463s
Feb  3 12:13:29.042: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234045126s
Feb  3 12:13:31.188: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380124082s
Feb  3 12:13:33.214: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406354502s
Feb  3 12:13:35.254: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.446744659s
STEP: Saw pod success
Feb  3 12:13:35.254: INFO: Pod "downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:13:35.264: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:13:35.347: INFO: Waiting for pod downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:13:35.403: INFO: Pod downwardapi-volume-93d0ce94-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:13:35.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m7bhg" for this suite.
Feb  3 12:13:41.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:13:41.536: INFO: namespace: e2e-tests-projected-m7bhg, resource: bindings, ignored listing per whitelist
Feb  3 12:13:41.610: INFO: namespace e2e-tests-projected-m7bhg deletion completed in 6.200286783s

• [SLOW TEST:17.121 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
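
DefaultMode on a projected downward-API volume sets the permission bits of every projected file unless an individual item overrides them. A small sketch of such a volume source, with an assumed file name and mode 0400:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400)
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// Files created from this projection get mode 0400 unless an item overrides it.
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
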
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:13:41.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  3 12:13:41.805: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:14:04.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-qwvvd" for this suite.
Feb  3 12:14:28.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:14:28.726: INFO: namespace: e2e-tests-init-container-qwvvd, resource: bindings, ignored listing per whitelist
Feb  3 12:14:28.895: INFO: namespace e2e-tests-init-container-qwvvd deletion completed in 24.305904284s

• [SLOW TEST:47.284 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
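
On a RestartAlways pod, init containers still run exactly once each, in order, before the regular containers start. A minimal sketch of such a pod spec (the image choices are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run to completion, one after another, before the app container starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "docker.io/library/nginx:1.14-alpine"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
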
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:14:28.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vwfr
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 12:14:29.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vwfr" in namespace "e2e-tests-subpath-njwfp" to be "success or failure"
Feb  3 12:14:29.244: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 94.470879ms
Feb  3 12:14:31.506: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357291783s
Feb  3 12:14:33.515: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366085278s
Feb  3 12:14:35.752: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602420419s
Feb  3 12:14:37.767: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617383339s
Feb  3 12:14:39.790: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640836961s
Feb  3 12:14:41.950: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.800745701s
Feb  3 12:14:43.968: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.818731527s
Feb  3 12:14:45.984: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.835000887s
Feb  3 12:14:47.999: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 18.85010367s
Feb  3 12:14:50.013: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 20.863804297s
Feb  3 12:14:52.025: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 22.875610027s
Feb  3 12:14:54.085: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 24.936227673s
Feb  3 12:14:56.099: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 26.95009309s
Feb  3 12:14:58.117: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 28.967936466s
Feb  3 12:15:00.143: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 30.993935482s
Feb  3 12:15:02.185: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 33.036324276s
Feb  3 12:15:04.212: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Running", Reason="", readiness=false. Elapsed: 35.062514517s
Feb  3 12:15:06.300: INFO: Pod "pod-subpath-test-configmap-vwfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.150495101s
STEP: Saw pod success
Feb  3 12:15:06.300: INFO: Pod "pod-subpath-test-configmap-vwfr" satisfied condition "success or failure"
Feb  3 12:15:06.317: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-vwfr container test-container-subpath-configmap-vwfr: 
STEP: delete the pod
Feb  3 12:15:06.743: INFO: Waiting for pod pod-subpath-test-configmap-vwfr to disappear
Feb  3 12:15:06.752: INFO: Pod pod-subpath-test-configmap-vwfr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vwfr
Feb  3 12:15:06.752: INFO: Deleting pod "pod-subpath-test-configmap-vwfr" in namespace "e2e-tests-subpath-njwfp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:15:06.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-njwfp" for this suite.
Feb  3 12:15:13.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:15:13.167: INFO: namespace: e2e-tests-subpath-njwfp, resource: bindings, ignored listing per whitelist
Feb  3 12:15:13.196: INFO: namespace e2e-tests-subpath-njwfp deletion completed in 6.249473541s

• [SLOW TEST:44.301 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
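
The subpath test mounts a single file out of a configmap-backed volume via subPath and keeps reading it while the atomic writer updates the volume underneath. The sketch below shows only the subPath mount itself (the configmap name, key and paths are assumptions); the real test additionally loops over reads while the content changes.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/data-1",
					// Mount a single file (one projected key) out of the configmap volume.
					SubPath: "data-1",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
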
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:15:13.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d4801698-467e-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:15:13.407: INFO: Waiting up to 5m0s for pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-r9ht6" to be "success or failure"
Feb  3 12:15:13.433: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.892446ms
Feb  3 12:15:15.448: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04100264s
Feb  3 12:15:17.461: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054444709s
Feb  3 12:15:19.483: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07593869s
Feb  3 12:15:21.690: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.283387099s
Feb  3 12:15:23.708: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.30121958s
STEP: Saw pod success
Feb  3 12:15:23.708: INFO: Pod "pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:15:23.715: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 12:15:24.483: INFO: Waiting for pod pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005 to disappear
Feb  3 12:15:24.752: INFO: Pod pod-secrets-d480c28e-467e-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:15:24.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r9ht6" for this suite.
Feb  3 12:15:31.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:15:31.226: INFO: namespace: e2e-tests-secrets-r9ht6, resource: bindings, ignored listing per whitelist
Feb  3 12:15:31.255: INFO: namespace e2e-tests-secrets-r9ht6 deletion completed in 6.44361401s

• [SLOW TEST:18.058 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
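
Item mode on a secret volume sets the permission bits per projected file, independently of the volume's default mode. A short sketch, with assumed secret name, key, path and mode 0400:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				// Remap the key to a new path and give that one file mode 0400.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &itemMode}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
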
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:15:31.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:15:31.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  3 12:15:31.599: INFO: stderr: ""
Feb  3 12:15:31.599: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  3 12:15:31.609: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:15:31.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p5tsk" for this suite.
Feb  3 12:15:39.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:15:39.865: INFO: namespace: e2e-tests-kubectl-p5tsk, resource: bindings, ignored listing per whitelist
Feb  3 12:15:39.891: INFO: namespace e2e-tests-kubectl-p5tsk deletion completed in 8.26778388s

S [SKIPPING] [8.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  3 12:15:31.609: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:15:39.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb  3 12:15:52.444: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:16:18.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-btfgt" for this suite.
Feb  3 12:16:24.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:16:24.896: INFO: namespace: e2e-tests-namespaces-btfgt, resource: bindings, ignored listing per whitelist
Feb  3 12:16:24.968: INFO: namespace e2e-tests-namespaces-btfgt deletion completed in 6.262003213s
STEP: Destroying namespace "e2e-tests-nsdeletetest-h5gjr" for this suite.
Feb  3 12:16:24.971: INFO: Namespace e2e-tests-nsdeletetest-h5gjr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-f4n4x" for this suite.
Feb  3 12:16:31.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:16:31.101: INFO: namespace: e2e-tests-nsdeletetest-f4n4x, resource: bindings, ignored listing per whitelist
Feb  3 12:16:31.178: INFO: namespace e2e-tests-nsdeletetest-f4n4x deletion completed in 6.206171063s

• [SLOW TEST:51.286 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
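
The namespace test verifies that deleting a namespace eventually removes the pods inside it. A hedged client-go sketch of the same check, written against a client-go vintage contemporary with this log (no context arguments on the API calls); the kubeconfig path and namespace name are illustrative.

package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "nsdeletetest-example" // illustrative namespace name

	// Deleting the namespace asks the namespace controller to remove everything in it.
	if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Wait until the namespace object itself is gone; by then its pods are gone too.
	err = wait.Poll(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace and its pods are gone")
}
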
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:16:31.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:16:31.366: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.512017ms)
Feb  3 12:16:31.372: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.551027ms)
Feb  3 12:16:31.377: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.305062ms)
Feb  3 12:16:31.382: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.775753ms)
Feb  3 12:16:31.387: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.961957ms)
Feb  3 12:16:31.391: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.481156ms)
Feb  3 12:16:31.399: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.950961ms)
Feb  3 12:16:31.521: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 121.939007ms)
Feb  3 12:16:31.534: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.307559ms)
Feb  3 12:16:31.543: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.141388ms)
Feb  3 12:16:31.553: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.352024ms)
Feb  3 12:16:31.565: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.36313ms)
Feb  3 12:16:31.572: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.404528ms)
Feb  3 12:16:31.654: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 81.866342ms)
Feb  3 12:16:31.662: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.605874ms)
Feb  3 12:16:31.667: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.450174ms)
Feb  3 12:16:31.673: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.205979ms)
Feb  3 12:16:31.679: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.785377ms)
Feb  3 12:16:31.685: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.282547ms)
Feb  3 12:16:31.691: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.751069ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:16:31.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-7c8ld" for this suite.
Feb  3 12:16:37.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:16:37.888: INFO: namespace: e2e-tests-proxy-7c8ld, resource: bindings, ignored listing per whitelist
Feb  3 12:16:37.937: INFO: namespace e2e-tests-proxy-7c8ld deletion completed in 6.239912362s

• [SLOW TEST:6.759 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
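
Each numbered line in the proxy test above is one round-trip through the apiserver's node proxy subresource to the kubelet's /logs/ listing, addressing the node with its explicit kubelet port. A hedged client-go sketch of a single such request, again assuming the client-go vintage matching this log (Do() without a context argument):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the kubelet's /logs/ listing through the apiserver's node proxy
	// subresource, using the node name with an explicit kubelet port.
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/").
		Do().Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
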
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:16:37.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:16:50.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7k9fk" for this suite.
Feb  3 12:16:56.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:16:57.023: INFO: namespace: e2e-tests-kubelet-test-7k9fk, resource: bindings, ignored listing per whitelist
Feb  3 12:16:57.181: INFO: namespace e2e-tests-kubelet-test-7k9fk deletion completed in 6.2461751s

• [SLOW TEST:19.244 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
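
The kubelet test schedules a container whose command always fails and then asserts that a terminated reason appears in the pod status. A minimal sketch of such a pod (busybox and /bin/false are assumptions); once the kubelet has run and failed the container, status.containerStatuses[0].state.terminated carries the reason (e.g. "Error") and a non-zero exit code.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
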
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:16:57.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 12:16:57.564: INFO: Waiting up to 5m0s for pod "pod-129edb31-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-5ngx6" to be "success or failure"
Feb  3 12:16:57.589: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.827571ms
Feb  3 12:16:59.602: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037787349s
Feb  3 12:17:01.625: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061249845s
Feb  3 12:17:03.802: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238060418s
Feb  3 12:17:05.845: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280672606s
Feb  3 12:17:07.886: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.322109599s
STEP: Saw pod success
Feb  3 12:17:07.886: INFO: Pod "pod-129edb31-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:17:07.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-129edb31-467f-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:17:08.110: INFO: Waiting for pod pod-129edb31-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:17:08.122: INFO: Pod pod-129edb31-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:17:08.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5ngx6" for this suite.
Feb  3 12:17:14.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:17:14.640: INFO: namespace: e2e-tests-emptydir-5ngx6, resource: bindings, ignored listing per whitelist
Feb  3 12:17:14.640: INFO: namespace e2e-tests-emptydir-5ngx6 deletion completed in 6.494424489s

• [SLOW TEST:17.459 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
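
An emptyDir with medium Memory is backed by tmpfs rather than the node's disk. A sketch of a pod that writes a 0644 file into such a volume and shows the mount; the real test uses the e2e mounttest image, so the busybox one-liner here is an approximation.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium Memory backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
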
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:17:14.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 12:17:14.927: INFO: Waiting up to 5m0s for pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-kscdm" to be "success or failure"
Feb  3 12:17:14.942: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.106712ms
Feb  3 12:17:17.310: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383339219s
Feb  3 12:17:19.320: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393074783s
Feb  3 12:17:21.484: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557510721s
Feb  3 12:17:23.532: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60485755s
Feb  3 12:17:25.552: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624644482s
STEP: Saw pod success
Feb  3 12:17:25.552: INFO: Pod "pod-1cf04a10-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:17:26.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1cf04a10-467f-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:17:26.511: INFO: Waiting for pod pod-1cf04a10-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:17:26.698: INFO: Pod pod-1cf04a10-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:17:26.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kscdm" for this suite.
Feb  3 12:17:32.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:17:32.956: INFO: namespace: e2e-tests-emptydir-kscdm, resource: bindings, ignored listing per whitelist
Feb  3 12:17:32.965: INFO: namespace e2e-tests-emptydir-kscdm deletion completed in 6.252211934s

• [SLOW TEST:18.324 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
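
The non-root/default-medium variant differs only in dropping the Memory medium and running the pod as a non-root user. A short sketch of that spec (the UID 1001 is an assumption):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRootUID := int64(1001) // illustrative non-root UID
	spec := corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			// No medium set: the emptyDir lives on the node's default storage.
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
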
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:17:32.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-27d06e8d-467f-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:17:33.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-jf279" to be "success or failure"
Feb  3 12:17:33.195: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.722425ms
Feb  3 12:17:35.334: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162710337s
Feb  3 12:17:37.403: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231316438s
Feb  3 12:17:39.412: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240786443s
Feb  3 12:17:41.494: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323015007s
Feb  3 12:17:43.727: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555406022s
STEP: Saw pod success
Feb  3 12:17:43.727: INFO: Pod "pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:17:43.753: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 12:17:44.162: INFO: Waiting for pod pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:17:44.179: INFO: Pod pod-projected-configmaps-27d914ad-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:17:44.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jf279" for this suite.
Feb  3 12:17:50.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:17:50.327: INFO: namespace: e2e-tests-projected-jf279, resource: bindings, ignored listing per whitelist
Feb  3 12:17:50.404: INFO: namespace e2e-tests-projected-jf279 deletion completed in 6.210714239s

• [SLOW TEST:17.439 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
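
Here the same configmap is projected into two separate volumes of one pod and both mounts are read back. A sketch of the relevant pod spec, assuming a configmap that has a key named data-1:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	cm := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
		},
	}
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		// The same configmap is projected into two separate volumes of the pod.
		Volumes: []corev1.Volume{
			{Name: "projected-configmap-volume", VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{cm}}}},
			{Name: "projected-configmap-volume-2", VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{cm}}}},
		},
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1 /etc/projected-configmap-volume-2/data-1"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume", ReadOnly: true},
				{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2", ReadOnly: true},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
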
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:17:50.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:17:50.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-9c8qf" to be "success or failure"
Feb  3 12:17:50.865: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.517702ms
Feb  3 12:17:52.901: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141296012s
Feb  3 12:17:54.958: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198816383s
Feb  3 12:17:57.005: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245869157s
Feb  3 12:17:59.036: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.276722874s
Feb  3 12:18:01.060: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300149851s
STEP: Saw pod success
Feb  3 12:18:01.060: INFO: Pod "downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:18:01.091: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:18:01.448: INFO: Waiting for pod downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:18:01.462: INFO: Pod downwardapi-volume-3254dee3-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:18:01.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9c8qf" for this suite.
Feb  3 12:18:07.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:18:07.708: INFO: namespace: e2e-tests-projected-9c8qf, resource: bindings, ignored listing per whitelist
Feb  3 12:18:07.832: INFO: namespace e2e-tests-projected-9c8qf deletion completed in 6.350591327s

• [SLOW TEST:17.428 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
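
The memory-limit variant uses a resourceFieldRef against limits.memory; with a divisor of 1Mi the projected file contains the limit in mebibytes. A sketch of the projected volume only (it assumes the pod also has a container named client-container carrying a memory limit):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// Expose the container's memory limit, expressed in 1Mi units.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
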
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:18:07.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb  3 12:18:18.213: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-3cb22e1a-467f-11ea-ab15-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-58hzq", SelfLink:"/api/v1/namespaces/e2e-tests-pods-58hzq/pods/pod-submit-remove-3cb22e1a-467f-11ea-ab15-0242ac110005", UID:"3cb392fb-467f-11ea-a994-fa163e34d433", ResourceVersion:"20416015", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716329088, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"132288979"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tx74v", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001098940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tx74v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00248fc88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002969980), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00248fcc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00248fce0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00248fce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00248fcec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716329088, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716329096, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716329096, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716329088, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001bd2180), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001bd21a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://72857504fbb02ef752aa8b1954579335eeb82f1ef22c7c5c5f43ea1dee422289"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:18:32.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-58hzq" for this suite.
Feb  3 12:18:39.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:18:39.138: INFO: namespace: e2e-tests-pods-58hzq, resource: bindings, ignored listing per whitelist
Feb  3 12:18:39.194: INFO: namespace e2e-tests-pods-58hzq deletion completed in 6.297202664s

• [SLOW TEST:31.361 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
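The test above exercises the basic pod lifecycle: submit a labelled pod, watch for the creation event, delete it gracefully, and confirm that both the kubelet and the watch observe the termination. A minimal manual sketch of the same flow (the pod name and label value here are illustrative, not taken from the run):

    kubectl run pod-submit-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=foo
    kubectl get pods -l name=foo --watch                  # creation and the Running phase show up as watch events
    kubectl delete pod pod-submit-demo --grace-period=30  # graceful deletion; the kubelet receives the termination notice
    kubectl get pods -l name=foo                          # empty once the deletion has been observed
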
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:18:39.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:18:39.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v4s4h" for this suite.
Feb  3 12:19:03.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:19:03.804: INFO: namespace: e2e-tests-pods-v4s4h, resource: bindings, ignored listing per whitelist
Feb  3 12:19:03.955: INFO: namespace e2e-tests-pods-v4s4h deletion completed in 24.404320231s

• [SLOW TEST:24.762 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
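The QOS Class test submits a pod whose class is derived from its resource requests and limits and then checks status.qosClass. A sketch of the same check (the resource values below are assumptions for illustration; requests equal to limits yields the Guaranteed class):

    # qos-demo.yaml (illustrative manifest)
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi

    kubectl apply -f qos-demo.yaml
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed when requests == limits
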
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:19:03.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  3 12:19:04.260: INFO: namespace e2e-tests-kubectl-hmplc
Feb  3 12:19:04.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hmplc'
Feb  3 12:19:06.693: INFO: stderr: ""
Feb  3 12:19:06.693: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  3 12:19:08.304: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:08.305: INFO: Found 0 / 1
Feb  3 12:19:08.730: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:08.731: INFO: Found 0 / 1
Feb  3 12:19:09.711: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:09.712: INFO: Found 0 / 1
Feb  3 12:19:10.713: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:10.713: INFO: Found 0 / 1
Feb  3 12:19:12.317: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:12.317: INFO: Found 0 / 1
Feb  3 12:19:12.721: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:12.721: INFO: Found 0 / 1
Feb  3 12:19:13.706: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:13.706: INFO: Found 0 / 1
Feb  3 12:19:14.726: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:14.726: INFO: Found 0 / 1
Feb  3 12:19:15.708: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:15.708: INFO: Found 0 / 1
Feb  3 12:19:16.712: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:16.712: INFO: Found 1 / 1
Feb  3 12:19:16.712: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  3 12:19:16.717: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 12:19:16.717: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 12:19:16.717: INFO: wait on redis-master startup in e2e-tests-kubectl-hmplc 
Feb  3 12:19:16.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mzqtl redis-master --namespace=e2e-tests-kubectl-hmplc'
Feb  3 12:19:16.876: INFO: stderr: ""
Feb  3 12:19:16.876: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 03 Feb 12:19:14.667 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 12:19:14.668 # Server started, Redis version 3.2.12\n1:M 03 Feb 12:19:14.668 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 12:19:14.668 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  3 12:19:16.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-hmplc'
Feb  3 12:19:17.153: INFO: stderr: ""
Feb  3 12:19:17.153: INFO: stdout: "service/rm2 exposed\n"
Feb  3 12:19:17.205: INFO: Service rm2 in namespace e2e-tests-kubectl-hmplc found.
STEP: exposing service
Feb  3 12:19:19.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-hmplc'
Feb  3 12:19:19.589: INFO: stderr: ""
Feb  3 12:19:19.589: INFO: stdout: "service/rm3 exposed\n"
Feb  3 12:19:19.620: INFO: Service rm3 in namespace e2e-tests-kubectl-hmplc found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:19:21.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hmplc" for this suite.
Feb  3 12:19:45.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:19:45.819: INFO: namespace: e2e-tests-kubectl-hmplc, resource: bindings, ignored listing per whitelist
Feb  3 12:19:45.845: INFO: namespace e2e-tests-kubectl-hmplc deletion completed in 24.189978063s

• [SLOW TEST:41.889 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
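The commands logged above can be reproduced outside the suite: expose a replication controller as a service, then expose that service under a second name. Both services select the same redis pods and forward their ports to 6379 (the RC manifest is piped in on stdin by the test; redis-master-rc.yaml is an assumed stand-in):

    kubectl create -f redis-master-rc.yaml                                   # RC with selector app=redis (assumed manifest)
    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    kubectl get services rm2 rm3 -o wide                                     # both route to the redis-master pod on 6379
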
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:19:45.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb  3 12:20:02.055: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:20:03.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-6664g" for this suite.
Feb  3 12:20:28.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:20:28.105: INFO: namespace: e2e-tests-replicaset-6664g, resource: bindings, ignored listing per whitelist
Feb  3 12:20:28.245: INFO: namespace e2e-tests-replicaset-6664g deletion completed in 24.692996704s

• [SLOW TEST:42.399 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
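Adoption and release work through ownerReferences: a ReplicaSet adopts any orphan pod that matches its selector, and drops the reference once the pod's labels stop matching. A sketch of the sequence the test drives (the manifest filename and the new label value are assumptions; the test builds its objects in code):

    kubectl run pod-adoption-release --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption-release
    kubectl create -f replicaset.yaml                   # ReplicaSet whose selector matches name=pod-adoption-release
    kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicaSet, once adopted
    kubectl label pod pod-adoption-release name=released --overwrite                         # label no longer matches the selector
    kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'           # empty after release
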
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:20:28.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 12:20:29.107: INFO: Waiting up to 5m0s for pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-2dmn4" to be "success or failure"
Feb  3 12:20:29.122: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.993161ms
Feb  3 12:20:31.535: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427964864s
Feb  3 12:20:33.569: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.461374722s
Feb  3 12:20:35.753: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646126731s
Feb  3 12:20:37.890: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.782868613s
Feb  3 12:20:39.971: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.863985151s
STEP: Saw pod success
Feb  3 12:20:39.971: INFO: Pod "pod-90b5b4d9-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:20:40.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-90b5b4d9-467f-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:20:40.174: INFO: Waiting for pod pod-90b5b4d9-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:20:40.193: INFO: Pod pod-90b5b4d9-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:20:40.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2dmn4" for this suite.
Feb  3 12:20:46.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:20:46.497: INFO: namespace: e2e-tests-emptydir-2dmn4, resource: bindings, ignored listing per whitelist
Feb  3 12:20:46.571: INFO: namespace e2e-tests-emptydir-2dmn4 deletion completed in 6.252045537s

• [SLOW TEST:18.327 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
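This emptydir test mounts an emptyDir volume, runs as a non-root user, and verifies the volume is created with 0777 permissions on the node's default medium. A hedged sketch with a stock busybox image standing in for the e2e mounttest container:

    # emptydir-0777-demo.yaml (illustrative manifest)
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        securityContext:
          runAsUser: 1001               # non-root, as in the (non-root,...) variant
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                    # default medium: backed by the node's disk

    kubectl apply -f emptydir-0777-demo.yaml
    kubectl logs emptydir-0777-demo     # the ls output shows the drwxrwxrwx (0777) mode
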
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:20:46.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  3 12:20:57.471: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9b44827f-467f-11ea-ab15-0242ac110005"
Feb  3 12:20:57.472: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9b44827f-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-pods-pqghr" to be "terminated due to deadline exceeded"
Feb  3 12:20:57.506: INFO: Pod "pod-update-activedeadlineseconds-9b44827f-467f-11ea-ab15-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 33.890078ms
Feb  3 12:20:59.611: INFO: Pod "pod-update-activedeadlineseconds-9b44827f-467f-11ea-ab15-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.138953727s
Feb  3 12:20:59.611: INFO: Pod "pod-update-activedeadlineseconds-9b44827f-467f-11ea-ab15-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:20:59.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pqghr" for this suite.
Feb  3 12:21:05.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:21:05.851: INFO: namespace: e2e-tests-pods-pqghr, resource: bindings, ignored listing per whitelist
Feb  3 12:21:05.881: INFO: namespace e2e-tests-pods-pqghr deletion completed in 6.25964074s

• [SLOW TEST:19.308 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
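activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a live pod; once set, the kubelet kills the pod after the deadline and the phase becomes Failed with reason DeadlineExceeded, which is exactly what the wait above matches on. A minimal sketch (the pod name and the 5-second value are illustrative):

    kubectl run deadline-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
    kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
    kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'   # eventually Failed/DeadlineExceeded
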
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:21:05.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  3 12:21:06.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  3 12:21:06.264: INFO: stderr: ""
Feb  3 12:21:06.265: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:21:06.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t92lq" for this suite.
Feb  3 12:21:12.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:21:12.382: INFO: namespace: e2e-tests-kubectl-t92lq, resource: bindings, ignored listing per whitelist
Feb  3 12:21:12.509: INFO: namespace e2e-tests-kubectl-t92lq deletion completed in 6.234870391s

• [SLOW TEST:6.628 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
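The escape sequences in the captured stdout above are ANSI colour codes emitted by kubectl; the underlying check is simply that the master and KubeDNS endpoints appear in the output:

    kubectl cluster-info          # lists the Kubernetes master and KubeDNS URLs
    kubectl cluster-info dump     # much more verbose diagnostics, as the hint in the output suggests
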
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:21:12.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  3 12:21:12.820: INFO: Waiting up to 5m0s for pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-hj8nf" to be "success or failure"
Feb  3 12:21:12.828: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.103905ms
Feb  3 12:21:15.135: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31457733s
Feb  3 12:21:17.150: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32919331s
Feb  3 12:21:20.156: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.335119034s
Feb  3 12:21:22.236: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.415739345s
Feb  3 12:21:24.254: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.43333021s
STEP: Saw pod success
Feb  3 12:21:24.254: INFO: Pod "pod-aac4e4f6-467f-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:21:24.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aac4e4f6-467f-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:21:24.419: INFO: Waiting for pod pod-aac4e4f6-467f-11ea-ab15-0242ac110005 to disappear
Feb  3 12:21:24.428: INFO: Pod pod-aac4e4f6-467f-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:21:24.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hj8nf" for this suite.
Feb  3 12:21:32.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:21:32.907: INFO: namespace: e2e-tests-emptydir-hj8nf, resource: bindings, ignored listing per whitelist
Feb  3 12:21:32.917: INFO: namespace e2e-tests-emptydir-hj8nf deletion completed in 8.480610876s

• [SLOW TEST:20.407 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
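The tmpfs variant differs from the default-medium test above only in the volume definition: setting medium: Memory backs the emptyDir with RAM instead of node disk (and its contents count against the pod's memory usage). The relevant fragment, relative to the earlier emptydir sketch:

      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory            # tmpfs-backed emptyDir
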
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:21:32.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vk47r
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vk47r
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vk47r
Feb  3 12:21:33.224: INFO: Found 0 stateful pods, waiting for 1
Feb  3 12:21:43.329: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 12:21:53.256: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  3 12:21:53.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 12:21:54.341: INFO: stderr: "I0203 12:21:53.521642    3288 log.go:172] (0xc000138580) (0xc000312e60) Create stream\nI0203 12:21:53.522033    3288 log.go:172] (0xc000138580) (0xc000312e60) Stream added, broadcasting: 1\nI0203 12:21:53.530846    3288 log.go:172] (0xc000138580) Reply frame received for 1\nI0203 12:21:53.530986    3288 log.go:172] (0xc000138580) (0xc000312fa0) Create stream\nI0203 12:21:53.531009    3288 log.go:172] (0xc000138580) (0xc000312fa0) Stream added, broadcasting: 3\nI0203 12:21:53.532678    3288 log.go:172] (0xc000138580) Reply frame received for 3\nI0203 12:21:53.532847    3288 log.go:172] (0xc000138580) (0xc0008a4500) Create stream\nI0203 12:21:53.532895    3288 log.go:172] (0xc000138580) (0xc0008a4500) Stream added, broadcasting: 5\nI0203 12:21:53.534834    3288 log.go:172] (0xc000138580) Reply frame received for 5\nI0203 12:21:53.698895    3288 log.go:172] (0xc000138580) Data frame received for 3\nI0203 12:21:53.699025    3288 log.go:172] (0xc000312fa0) (3) Data frame handling\nI0203 12:21:53.699073    3288 log.go:172] (0xc000312fa0) (3) Data frame sent\nI0203 12:21:54.304666    3288 log.go:172] (0xc000138580) Data frame received for 1\nI0203 12:21:54.304871    3288 log.go:172] (0xc000312e60) (1) Data frame handling\nI0203 12:21:54.304905    3288 log.go:172] (0xc000312e60) (1) Data frame sent\nI0203 12:21:54.304952    3288 log.go:172] (0xc000138580) (0xc000312e60) Stream removed, broadcasting: 1\nI0203 12:21:54.324351    3288 log.go:172] (0xc000138580) (0xc0008a4500) Stream removed, broadcasting: 5\nI0203 12:21:54.324994    3288 log.go:172] (0xc000138580) (0xc000312fa0) Stream removed, broadcasting: 3\nI0203 12:21:54.325293    3288 log.go:172] (0xc000138580) (0xc000312e60) Stream removed, broadcasting: 1\nI0203 12:21:54.325316    3288 log.go:172] (0xc000138580) (0xc000312fa0) Stream removed, broadcasting: 3\nI0203 12:21:54.325330    3288 log.go:172] (0xc000138580) (0xc0008a4500) Stream removed, broadcasting: 5\n"
Feb  3 12:21:54.342: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 12:21:54.342: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 12:21:54.382: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 12:21:54.382: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 12:21:54.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.9999963s
Feb  3 12:21:55.583: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.9507924s
Feb  3 12:21:56.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.932511612s
Feb  3 12:21:57.678: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.877857206s
Feb  3 12:21:58.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.837558049s
Feb  3 12:21:59.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.816121212s
Feb  3 12:22:00.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.787609438s
Feb  3 12:22:01.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.774572716s
Feb  3 12:22:02.872: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.751671522s
Feb  3 12:22:03.900: INFO: Verifying statefulset ss doesn't scale past 1 for another 643.715948ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vk47r
Feb  3 12:22:04.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 12:22:05.427: INFO: stderr: "I0203 12:22:05.128142    3309 log.go:172] (0xc000748370) (0xc00069b400) Create stream\nI0203 12:22:05.128646    3309 log.go:172] (0xc000748370) (0xc00069b400) Stream added, broadcasting: 1\nI0203 12:22:05.138785    3309 log.go:172] (0xc000748370) Reply frame received for 1\nI0203 12:22:05.138888    3309 log.go:172] (0xc000748370) (0xc00069b4a0) Create stream\nI0203 12:22:05.138908    3309 log.go:172] (0xc000748370) (0xc00069b4a0) Stream added, broadcasting: 3\nI0203 12:22:05.141616    3309 log.go:172] (0xc000748370) Reply frame received for 3\nI0203 12:22:05.141671    3309 log.go:172] (0xc000748370) (0xc00069b540) Create stream\nI0203 12:22:05.141685    3309 log.go:172] (0xc000748370) (0xc00069b540) Stream added, broadcasting: 5\nI0203 12:22:05.143132    3309 log.go:172] (0xc000748370) Reply frame received for 5\nI0203 12:22:05.249057    3309 log.go:172] (0xc000748370) Data frame received for 3\nI0203 12:22:05.249184    3309 log.go:172] (0xc00069b4a0) (3) Data frame handling\nI0203 12:22:05.249222    3309 log.go:172] (0xc00069b4a0) (3) Data frame sent\nI0203 12:22:05.406180    3309 log.go:172] (0xc000748370) Data frame received for 1\nI0203 12:22:05.406412    3309 log.go:172] (0xc000748370) (0xc00069b540) Stream removed, broadcasting: 5\nI0203 12:22:05.406513    3309 log.go:172] (0xc00069b400) (1) Data frame handling\nI0203 12:22:05.406542    3309 log.go:172] (0xc00069b400) (1) Data frame sent\nI0203 12:22:05.406653    3309 log.go:172] (0xc000748370) (0xc00069b4a0) Stream removed, broadcasting: 3\nI0203 12:22:05.406727    3309 log.go:172] (0xc000748370) (0xc00069b400) Stream removed, broadcasting: 1\nI0203 12:22:05.407021    3309 log.go:172] (0xc000748370) Go away received\nI0203 12:22:05.407721    3309 log.go:172] (0xc000748370) (0xc00069b400) Stream removed, broadcasting: 1\nI0203 12:22:05.407787    3309 log.go:172] (0xc000748370) (0xc00069b4a0) Stream removed, broadcasting: 3\nI0203 12:22:05.407847    3309 log.go:172] (0xc000748370) (0xc00069b540) Stream removed, broadcasting: 5\n"
Feb  3 12:22:05.427: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 12:22:05.427: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 12:22:05.450: INFO: Found 1 stateful pods, waiting for 3
Feb  3 12:22:15.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:22:15.471: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:22:15.471: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 12:22:25.477: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:22:25.477: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:22:25.477: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  3 12:22:25.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 12:22:26.122: INFO: stderr: "I0203 12:22:25.764651    3331 log.go:172] (0xc000722370) (0xc000750640) Create stream\nI0203 12:22:25.765012    3331 log.go:172] (0xc000722370) (0xc000750640) Stream added, broadcasting: 1\nI0203 12:22:25.773396    3331 log.go:172] (0xc000722370) Reply frame received for 1\nI0203 12:22:25.773511    3331 log.go:172] (0xc000722370) (0xc0007506e0) Create stream\nI0203 12:22:25.773524    3331 log.go:172] (0xc000722370) (0xc0007506e0) Stream added, broadcasting: 3\nI0203 12:22:25.775541    3331 log.go:172] (0xc000722370) Reply frame received for 3\nI0203 12:22:25.775588    3331 log.go:172] (0xc000722370) (0xc000750780) Create stream\nI0203 12:22:25.775611    3331 log.go:172] (0xc000722370) (0xc000750780) Stream added, broadcasting: 5\nI0203 12:22:25.777763    3331 log.go:172] (0xc000722370) Reply frame received for 5\nI0203 12:22:25.947775    3331 log.go:172] (0xc000722370) Data frame received for 3\nI0203 12:22:25.947947    3331 log.go:172] (0xc0007506e0) (3) Data frame handling\nI0203 12:22:25.947985    3331 log.go:172] (0xc0007506e0) (3) Data frame sent\nI0203 12:22:26.104892    3331 log.go:172] (0xc000722370) Data frame received for 1\nI0203 12:22:26.105056    3331 log.go:172] (0xc000722370) (0xc0007506e0) Stream removed, broadcasting: 3\nI0203 12:22:26.105174    3331 log.go:172] (0xc000750640) (1) Data frame handling\nI0203 12:22:26.105206    3331 log.go:172] (0xc000750640) (1) Data frame sent\nI0203 12:22:26.105336    3331 log.go:172] (0xc000722370) (0xc000750780) Stream removed, broadcasting: 5\nI0203 12:22:26.105391    3331 log.go:172] (0xc000722370) (0xc000750640) Stream removed, broadcasting: 1\nI0203 12:22:26.105406    3331 log.go:172] (0xc000722370) Go away received\nI0203 12:22:26.106905    3331 log.go:172] (0xc000722370) (0xc000750640) Stream removed, broadcasting: 1\nI0203 12:22:26.106989    3331 log.go:172] (0xc000722370) (0xc0007506e0) Stream removed, broadcasting: 3\nI0203 12:22:26.107046    3331 log.go:172] (0xc000722370) (0xc000750780) Stream removed, broadcasting: 5\n"
Feb  3 12:22:26.122: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 12:22:26.122: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 12:22:26.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 12:22:26.943: INFO: stderr: "I0203 12:22:26.305313    3353 log.go:172] (0xc00084a160) (0xc00064f540) Create stream\nI0203 12:22:26.305669    3353 log.go:172] (0xc00084a160) (0xc00064f540) Stream added, broadcasting: 1\nI0203 12:22:26.312000    3353 log.go:172] (0xc00084a160) Reply frame received for 1\nI0203 12:22:26.312065    3353 log.go:172] (0xc00084a160) (0xc00064f5e0) Create stream\nI0203 12:22:26.312071    3353 log.go:172] (0xc00084a160) (0xc00064f5e0) Stream added, broadcasting: 3\nI0203 12:22:26.313055    3353 log.go:172] (0xc00084a160) Reply frame received for 3\nI0203 12:22:26.313089    3353 log.go:172] (0xc00084a160) (0xc00029a000) Create stream\nI0203 12:22:26.313100    3353 log.go:172] (0xc00084a160) (0xc00029a000) Stream added, broadcasting: 5\nI0203 12:22:26.314513    3353 log.go:172] (0xc00084a160) Reply frame received for 5\nI0203 12:22:26.694293    3353 log.go:172] (0xc00084a160) Data frame received for 3\nI0203 12:22:26.694666    3353 log.go:172] (0xc00064f5e0) (3) Data frame handling\nI0203 12:22:26.694719    3353 log.go:172] (0xc00064f5e0) (3) Data frame sent\nI0203 12:22:26.924494    3353 log.go:172] (0xc00084a160) Data frame received for 1\nI0203 12:22:26.924936    3353 log.go:172] (0xc00084a160) (0xc00029a000) Stream removed, broadcasting: 5\nI0203 12:22:26.924988    3353 log.go:172] (0xc00064f540) (1) Data frame handling\nI0203 12:22:26.925002    3353 log.go:172] (0xc00064f540) (1) Data frame sent\nI0203 12:22:26.925041    3353 log.go:172] (0xc00084a160) (0xc00064f5e0) Stream removed, broadcasting: 3\nI0203 12:22:26.925064    3353 log.go:172] (0xc00084a160) (0xc00064f540) Stream removed, broadcasting: 1\nI0203 12:22:26.925081    3353 log.go:172] (0xc00084a160) Go away received\nI0203 12:22:26.926197    3353 log.go:172] (0xc00084a160) (0xc00064f540) Stream removed, broadcasting: 1\nI0203 12:22:26.926214    3353 log.go:172] (0xc00084a160) (0xc00064f5e0) Stream removed, broadcasting: 3\nI0203 12:22:26.926221    3353 log.go:172] (0xc00084a160) (0xc00029a000) Stream removed, broadcasting: 5\n"
Feb  3 12:22:26.943: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 12:22:26.943: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 12:22:26.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 12:22:27.532: INFO: stderr: "I0203 12:22:27.239233    3375 log.go:172] (0xc00013a840) (0xc000697360) Create stream\nI0203 12:22:27.239895    3375 log.go:172] (0xc00013a840) (0xc000697360) Stream added, broadcasting: 1\nI0203 12:22:27.244195    3375 log.go:172] (0xc00013a840) Reply frame received for 1\nI0203 12:22:27.244253    3375 log.go:172] (0xc00013a840) (0xc0006fa000) Create stream\nI0203 12:22:27.244263    3375 log.go:172] (0xc00013a840) (0xc0006fa000) Stream added, broadcasting: 3\nI0203 12:22:27.245269    3375 log.go:172] (0xc00013a840) Reply frame received for 3\nI0203 12:22:27.245308    3375 log.go:172] (0xc00013a840) (0xc0007d8000) Create stream\nI0203 12:22:27.245317    3375 log.go:172] (0xc00013a840) (0xc0007d8000) Stream added, broadcasting: 5\nI0203 12:22:27.246228    3375 log.go:172] (0xc00013a840) Reply frame received for 5\nI0203 12:22:27.404375    3375 log.go:172] (0xc00013a840) Data frame received for 3\nI0203 12:22:27.404482    3375 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0203 12:22:27.404498    3375 log.go:172] (0xc0006fa000) (3) Data frame sent\nI0203 12:22:27.521563    3375 log.go:172] (0xc00013a840) (0xc0007d8000) Stream removed, broadcasting: 5\nI0203 12:22:27.521704    3375 log.go:172] (0xc00013a840) Data frame received for 1\nI0203 12:22:27.521750    3375 log.go:172] (0xc00013a840) (0xc0006fa000) Stream removed, broadcasting: 3\nI0203 12:22:27.521809    3375 log.go:172] (0xc000697360) (1) Data frame handling\nI0203 12:22:27.521823    3375 log.go:172] (0xc000697360) (1) Data frame sent\nI0203 12:22:27.521829    3375 log.go:172] (0xc00013a840) (0xc000697360) Stream removed, broadcasting: 1\nI0203 12:22:27.521845    3375 log.go:172] (0xc00013a840) Go away received\nI0203 12:22:27.522513    3375 log.go:172] (0xc00013a840) (0xc000697360) Stream removed, broadcasting: 1\nI0203 12:22:27.522528    3375 log.go:172] (0xc00013a840) (0xc0006fa000) Stream removed, broadcasting: 3\nI0203 12:22:27.522536    3375 log.go:172] (0xc00013a840) (0xc0007d8000) Stream removed, broadcasting: 5\n"
Feb  3 12:22:27.532: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 12:22:27.532: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 12:22:27.532: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 12:22:27.549: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  3 12:22:37.623: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 12:22:37.623: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 12:22:37.623: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 12:22:37.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999386s
Feb  3 12:22:38.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973545253s
Feb  3 12:22:39.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.93746703s
Feb  3 12:22:40.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.918028897s
Feb  3 12:22:41.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.830341019s
Feb  3 12:22:42.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.795372672s
Feb  3 12:22:43.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.705544113s
Feb  3 12:22:44.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.668543478s
Feb  3 12:22:46.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.654662711s
Feb  3 12:22:47.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 639.575083ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-vk47r
Feb  3 12:22:48.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 12:22:48.958: INFO: stderr: "I0203 12:22:48.343865    3397 log.go:172] (0xc000782370) (0xc0006ef2c0) Create stream\nI0203 12:22:48.344355    3397 log.go:172] (0xc000782370) (0xc0006ef2c0) Stream added, broadcasting: 1\nI0203 12:22:48.357191    3397 log.go:172] (0xc000782370) Reply frame received for 1\nI0203 12:22:48.357254    3397 log.go:172] (0xc000782370) (0xc0006d6000) Create stream\nI0203 12:22:48.357270    3397 log.go:172] (0xc000782370) (0xc0006d6000) Stream added, broadcasting: 3\nI0203 12:22:48.360284    3397 log.go:172] (0xc000782370) Reply frame received for 3\nI0203 12:22:48.360340    3397 log.go:172] (0xc000782370) (0xc0005a6000) Create stream\nI0203 12:22:48.360358    3397 log.go:172] (0xc000782370) (0xc0005a6000) Stream added, broadcasting: 5\nI0203 12:22:48.361339    3397 log.go:172] (0xc000782370) Reply frame received for 5\nI0203 12:22:48.569232    3397 log.go:172] (0xc000782370) Data frame received for 3\nI0203 12:22:48.569465    3397 log.go:172] (0xc0006d6000) (3) Data frame handling\nI0203 12:22:48.569502    3397 log.go:172] (0xc0006d6000) (3) Data frame sent\nI0203 12:22:48.941108    3397 log.go:172] (0xc000782370) (0xc0005a6000) Stream removed, broadcasting: 5\nI0203 12:22:48.941247    3397 log.go:172] (0xc000782370) Data frame received for 1\nI0203 12:22:48.941378    3397 log.go:172] (0xc000782370) (0xc0006d6000) Stream removed, broadcasting: 3\nI0203 12:22:48.941449    3397 log.go:172] (0xc0006ef2c0) (1) Data frame handling\nI0203 12:22:48.941481    3397 log.go:172] (0xc0006ef2c0) (1) Data frame sent\nI0203 12:22:48.941493    3397 log.go:172] (0xc000782370) (0xc0006ef2c0) Stream removed, broadcasting: 1\nI0203 12:22:48.941522    3397 log.go:172] (0xc000782370) Go away received\nI0203 12:22:48.942904    3397 log.go:172] (0xc000782370) (0xc0006ef2c0) Stream removed, broadcasting: 1\nI0203 12:22:48.942918    3397 log.go:172] (0xc000782370) (0xc0006d6000) Stream removed, broadcasting: 3\nI0203 12:22:48.942924    3397 log.go:172] (0xc000782370) (0xc0005a6000) Stream removed, broadcasting: 5\n"
Feb  3 12:22:48.959: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 12:22:48.959: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 12:22:48.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 12:22:49.585: INFO: stderr: "I0203 12:22:49.259871    3419 log.go:172] (0xc00013a6e0) (0xc0005194a0) Create stream\nI0203 12:22:49.260088    3419 log.go:172] (0xc00013a6e0) (0xc0005194a0) Stream added, broadcasting: 1\nI0203 12:22:49.270409    3419 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0203 12:22:49.270457    3419 log.go:172] (0xc00013a6e0) (0xc0006ca000) Create stream\nI0203 12:22:49.270468    3419 log.go:172] (0xc00013a6e0) (0xc0006ca000) Stream added, broadcasting: 3\nI0203 12:22:49.274731    3419 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0203 12:22:49.274764    3419 log.go:172] (0xc00013a6e0) (0xc000519540) Create stream\nI0203 12:22:49.274774    3419 log.go:172] (0xc00013a6e0) (0xc000519540) Stream added, broadcasting: 5\nI0203 12:22:49.276415    3419 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0203 12:22:49.388377    3419 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0203 12:22:49.388516    3419 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0203 12:22:49.388582    3419 log.go:172] (0xc0006ca000) (3) Data frame sent\nI0203 12:22:49.571236    3419 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0203 12:22:49.571343    3419 log.go:172] (0xc0005194a0) (1) Data frame handling\nI0203 12:22:49.571376    3419 log.go:172] (0xc0005194a0) (1) Data frame sent\nI0203 12:22:49.571408    3419 log.go:172] (0xc00013a6e0) (0xc0005194a0) Stream removed, broadcasting: 1\nI0203 12:22:49.571594    3419 log.go:172] (0xc00013a6e0) (0xc0006ca000) Stream removed, broadcasting: 3\nI0203 12:22:49.571695    3419 log.go:172] (0xc00013a6e0) (0xc000519540) Stream removed, broadcasting: 5\nI0203 12:22:49.571827    3419 log.go:172] (0xc00013a6e0) (0xc0005194a0) Stream removed, broadcasting: 1\nI0203 12:22:49.571849    3419 log.go:172] (0xc00013a6e0) (0xc0006ca000) Stream removed, broadcasting: 3\nI0203 12:22:49.571861    3419 log.go:172] (0xc00013a6e0) (0xc000519540) Stream removed, broadcasting: 5\nI0203 12:22:49.572160    3419 log.go:172] (0xc00013a6e0) Go away received\n"
Feb  3 12:22:49.585: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 12:22:49.585: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 12:22:49.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vk47r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 12:22:50.343: INFO: stderr: "I0203 12:22:49.892151    3440 log.go:172] (0xc0006ea370) (0xc000728640) Create stream\nI0203 12:22:49.892655    3440 log.go:172] (0xc0006ea370) (0xc000728640) Stream added, broadcasting: 1\nI0203 12:22:49.930809    3440 log.go:172] (0xc0006ea370) Reply frame received for 1\nI0203 12:22:49.931144    3440 log.go:172] (0xc0006ea370) (0xc0007286e0) Create stream\nI0203 12:22:49.931174    3440 log.go:172] (0xc0006ea370) (0xc0007286e0) Stream added, broadcasting: 3\nI0203 12:22:49.936007    3440 log.go:172] (0xc0006ea370) Reply frame received for 3\nI0203 12:22:49.936249    3440 log.go:172] (0xc0006ea370) (0xc0005d0c80) Create stream\nI0203 12:22:49.936287    3440 log.go:172] (0xc0006ea370) (0xc0005d0c80) Stream added, broadcasting: 5\nI0203 12:22:49.953252    3440 log.go:172] (0xc0006ea370) Reply frame received for 5\nI0203 12:22:50.153592    3440 log.go:172] (0xc0006ea370) Data frame received for 3\nI0203 12:22:50.153729    3440 log.go:172] (0xc0007286e0) (3) Data frame handling\nI0203 12:22:50.153755    3440 log.go:172] (0xc0007286e0) (3) Data frame sent\nI0203 12:22:50.333371    3440 log.go:172] (0xc0006ea370) (0xc0005d0c80) Stream removed, broadcasting: 5\nI0203 12:22:50.333697    3440 log.go:172] (0xc0006ea370) Data frame received for 1\nI0203 12:22:50.333756    3440 log.go:172] (0xc0006ea370) (0xc0007286e0) Stream removed, broadcasting: 3\nI0203 12:22:50.333794    3440 log.go:172] (0xc000728640) (1) Data frame handling\nI0203 12:22:50.333817    3440 log.go:172] (0xc000728640) (1) Data frame sent\nI0203 12:22:50.333829    3440 log.go:172] (0xc0006ea370) (0xc000728640) Stream removed, broadcasting: 1\nI0203 12:22:50.333850    3440 log.go:172] (0xc0006ea370) Go away received\nI0203 12:22:50.334576    3440 log.go:172] (0xc0006ea370) (0xc000728640) Stream removed, broadcasting: 1\nI0203 12:22:50.334672    3440 log.go:172] (0xc0006ea370) (0xc0007286e0) Stream removed, broadcasting: 3\nI0203 12:22:50.334731    3440 log.go:172] (0xc0006ea370) (0xc0005d0c80) Stream removed, broadcasting: 5\n"
Feb  3 12:22:50.344: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 12:22:50.344: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 12:22:50.344: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  3 12:23:10.522: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vk47r
Feb  3 12:23:10.555: INFO: Scaling statefulset ss to 0
Feb  3 12:23:10.622: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 12:23:10.638: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:23:10.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vk47r" for this suite.
Feb  3 12:23:18.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:23:19.037: INFO: namespace: e2e-tests-statefulset-vk47r, resource: bindings, ignored listing per whitelist
Feb  3 12:23:19.095: INFO: namespace e2e-tests-statefulset-vk47r deletion completed in 8.346441439s

• [SLOW TEST:106.178 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
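With the default podManagementPolicy of OrderedReady, a StatefulSet creates and deletes pods strictly one ordinal at a time and does not proceed past a pod that is not Ready, which is why moving index.html out of the nginx web root (breaking the pod's readiness check) halts scaling in the test above. The equivalent manual steps, using the selector from the log:

    kubectl scale statefulset ss --replicas=3
    kubectl get pods -l baz=blah,foo=bar --watch   # ss-0, ss-1, ss-2 come up one at a time, in order
    kubectl scale statefulset ss --replicas=0      # tear-down runs in reverse ordinal order, ss-2 first
    # An unhealthy (not Ready) pod blocks scaling in both directions until it recovers.
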
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:23:19.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-rsbmm
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  3 12:23:19.382: INFO: Found 0 stateful pods, waiting for 3
Feb  3 12:23:29.398: INFO: Found 1 stateful pods, waiting for 3
Feb  3 12:23:39.398: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:23:39.398: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:23:39.398: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 12:23:49.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:23:49.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:23:49.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  3 12:23:49.468: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  3 12:23:59.777: INFO: Updating stateful set ss2
Feb  3 12:23:59.806: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 12:24:09.849: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  3 12:24:20.663: INFO: Found 2 stateful pods, waiting for 3
Feb  3 12:24:30.675: INFO: Found 2 stateful pods, waiting for 3
Feb  3 12:24:40.704: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:24:40.705: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:24:40.705: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 12:24:50.687: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:24:50.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 12:24:50.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  3 12:24:50.744: INFO: Updating stateful set ss2
Feb  3 12:24:50.806: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 12:25:00.858: INFO: Updating stateful set ss2
Feb  3 12:25:00.954: INFO: Waiting for StatefulSet e2e-tests-statefulset-rsbmm/ss2 to complete update
Feb  3 12:25:00.955: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 12:25:10.984: INFO: Waiting for StatefulSet e2e-tests-statefulset-rsbmm/ss2 to complete update
Feb  3 12:25:10.984: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 12:25:20.980: INFO: Waiting for StatefulSet e2e-tests-statefulset-rsbmm/ss2 to complete update
Feb  3 12:25:20.980: INFO: Waiting for Pod e2e-tests-statefulset-rsbmm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 12:25:30.990: INFO: Waiting for StatefulSet e2e-tests-statefulset-rsbmm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  3 12:25:41.002: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rsbmm
Feb  3 12:25:41.015: INFO: Scaling statefulset ss2 to 0
Feb  3 12:26:11.152: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 12:26:11.223: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:26:11.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-rsbmm" for this suite.
Feb  3 12:26:19.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:26:19.622: INFO: namespace: e2e-tests-statefulset-rsbmm, resource: bindings, ignored listing per whitelist
Feb  3 12:26:19.664: INFO: namespace e2e-tests-statefulset-rsbmm deletion completed in 8.328575971s

• [SLOW TEST:180.568 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
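Editor's note: the canary and phased roll-out in the spec above are controlled by the partition field of the StatefulSet's RollingUpdate strategy; only pods whose ordinal is greater than or equal to the partition are moved to the new template revision. A minimal sketch, assuming the k8s.io/api/apps/v1 types; the helper name and partition values are illustrative:

  package main

  import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
  )

  // strategyWithPartition is a hypothetical helper: pods with ordinal >=
  // partition receive the updated template, the rest keep the old revision.
  func strategyWithPartition(partition int32) appsv1.StatefulSetUpdateStrategy {
    return appsv1.StatefulSetUpdateStrategy{
      Type: appsv1.RollingUpdateStatefulSetStrategyType,
      RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
        Partition: &partition,
      },
    }
  }

  func main() {
    // partition == replicas: no pod is updated ("partition is greater than
    // the number of replicas" step above).
    fmt.Println(*strategyWithPartition(3).RollingUpdate.Partition)
    // partition == 2: canary -- only ss2-2 moves to the new revision.
    fmt.Println(*strategyWithPartition(2).RollingUpdate.Partition)
    // partition == 0: phased roll-out completes for ss2-1 and then ss2-0.
    fmt.Println(*strategyWithPartition(0).RollingUpdate.Partition)
  }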
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:26:19.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  3 12:26:19.981: INFO: Waiting up to 5m0s for pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005" in namespace "e2e-tests-containers-dbqgf" to be "success or failure"
Feb  3 12:26:20.011: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.283332ms
Feb  3 12:26:22.027: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045973769s
Feb  3 12:26:24.041: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05992918s
Feb  3 12:26:26.104: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1232706s
Feb  3 12:26:28.151: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17015966s
Feb  3 12:26:30.164: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182704527s
STEP: Saw pod success
Feb  3 12:26:30.164: INFO: Pod "client-containers-61d0c802-4680-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:26:30.174: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-61d0c802-4680-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:26:30.916: INFO: Waiting for pod client-containers-61d0c802-4680-11ea-ab15-0242ac110005 to disappear
Feb  3 12:26:31.189: INFO: Pod client-containers-61d0c802-4680-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:26:31.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dbqgf" for this suite.
Feb  3 12:26:39.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:26:39.327: INFO: namespace: e2e-tests-containers-dbqgf, resource: bindings, ignored listing per whitelist
Feb  3 12:26:39.530: INFO: namespace e2e-tests-containers-dbqgf deletion completed in 8.311737058s

• [SLOW TEST:19.866 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
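Editor's note: the "override arguments" pod above relies on the container's args field replacing the image's default CMD (command would replace the ENTRYPOINT instead). A minimal sketch, assuming k8s.io/api/core/v1; the image and arguments are illustrative, not taken from this run:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "client-containers"},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Containers: []corev1.Container{{
          Name:  "test-container",
          Image: "busybox",
          // Args overrides the image's default CMD; the ENTRYPOINT
          // (Command) is left untouched.
          Args: []string{"echo", "override", "arguments"},
        }},
      },
    }
    fmt.Println(pod.Spec.Containers[0].Args)
  }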
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:26:39.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  3 12:26:39.760: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bcp85,SelfLink:/api/v1/namespaces/e2e-tests-watch-bcp85/configmaps/e2e-watch-test-watch-closed,UID:6da2a8c7-4680-11ea-a994-fa163e34d433,ResourceVersion:20417290,Generation:0,CreationTimestamp:2020-02-03 12:26:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 12:26:39.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bcp85,SelfLink:/api/v1/namespaces/e2e-tests-watch-bcp85/configmaps/e2e-watch-test-watch-closed,UID:6da2a8c7-4680-11ea-a994-fa163e34d433,ResourceVersion:20417291,Generation:0,CreationTimestamp:2020-02-03 12:26:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  3 12:26:39.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bcp85,SelfLink:/api/v1/namespaces/e2e-tests-watch-bcp85/configmaps/e2e-watch-test-watch-closed,UID:6da2a8c7-4680-11ea-a994-fa163e34d433,ResourceVersion:20417292,Generation:0,CreationTimestamp:2020-02-03 12:26:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 12:26:39.778: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bcp85,SelfLink:/api/v1/namespaces/e2e-tests-watch-bcp85/configmaps/e2e-watch-test-watch-closed,UID:6da2a8c7-4680-11ea-a994-fa163e34d433,ResourceVersion:20417293,Generation:0,CreationTimestamp:2020-02-03 12:26:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:26:39.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-bcp85" for this suite.
Feb  3 12:26:45.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:26:45.953: INFO: namespace: e2e-tests-watch-bcp85, resource: bindings, ignored listing per whitelist
Feb  3 12:26:46.013: INFO: namespace e2e-tests-watch-bcp85 deletion completed in 6.227580537s

• [SLOW TEST:6.482 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
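Editor's note: restarting a watch from the last observed resourceVersion is what lets the client above receive the MODIFIED and DELETED events that happened while its first watch was closed. A minimal client-go sketch, assuming a clientset built elsewhere and the pre-1.17 method signatures (no context argument) that match the v1.13 cluster in this log; the helper name and label selector value are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
  )

  // watchFromLastRV is a hypothetical helper: it opens a new ConfigMap watch
  // in ns starting at the resourceVersion the previous watch last delivered.
  func watchFromLastRV(cs kubernetes.Interface, ns, lastRV string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
      LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
      ResourceVersion: lastRV, // resume where the closed watch left off
    })
    if err != nil {
      return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
      cm, ok := ev.Object.(*corev1.ConfigMap)
      if !ok {
        continue
      }
      // Each event carries the object's resourceVersion; a caller would
      // persist it so the next restart can resume from this point.
      fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
    }
    return nil
  }

  func main() {} // clientset wiring omitted; see the note above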
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:26:46.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-717ffa1e-4680-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:26:46.246: INFO: Waiting up to 5m0s for pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-ltdvk" to be "success or failure"
Feb  3 12:26:46.265: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.118776ms
Feb  3 12:26:48.576: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329890132s
Feb  3 12:26:50.615: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369323628s
Feb  3 12:26:52.728: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482285016s
Feb  3 12:26:54.750: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504112312s
Feb  3 12:26:57.358: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.112232517s
STEP: Saw pod success
Feb  3 12:26:57.358: INFO: Pod "pod-secrets-7180902b-4680-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:26:57.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7180902b-4680-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 12:26:57.558: INFO: Waiting for pod pod-secrets-7180902b-4680-11ea-ab15-0242ac110005 to disappear
Feb  3 12:26:57.567: INFO: Pod pod-secrets-7180902b-4680-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:26:57.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ltdvk" for this suite.
Feb  3 12:27:03.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:27:03.719: INFO: namespace: e2e-tests-secrets-ltdvk, resource: bindings, ignored listing per whitelist
Feb  3 12:27:03.762: INFO: namespace e2e-tests-secrets-ltdvk deletion completed in 6.185532649s

• [SLOW TEST:17.747 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
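Editor's note: consuming one Secret in multiple volumes simply means two volume entries pointing at the same secretName, mounted at different paths in the same container. A minimal sketch, assuming k8s.io/api/core/v1; the secret name and mount paths are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    secretVol := func(name string) corev1.Volume {
      return corev1.Volume{
        Name: name,
        VolumeSource: corev1.VolumeSource{
          Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
        },
      }
    }
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
      Spec: corev1.PodSpec{
        Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
        Containers: []corev1.Container{{
          Name:  "secret-volume-test",
          Image: "busybox",
          VolumeMounts: []corev1.VolumeMount{
            {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
            {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
          },
        }},
      },
    }
    fmt.Println(len(pod.Spec.Volumes), "volumes backed by the same secret")
  }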
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:27:03.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:27:04.022: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:27:05.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jtmbk" for this suite.
Feb  3 12:27:11.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:27:11.579: INFO: namespace: e2e-tests-custom-resource-definition-jtmbk, resource: bindings, ignored listing per whitelist
Feb  3 12:27:11.622: INFO: namespace e2e-tests-custom-resource-definition-jtmbk deletion completed in 6.27255995s

• [SLOW TEST:7.859 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
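Editor's note: CustomResourceDefinitions are created and deleted through the apiextensions API rather than the core clientset, which is why the test above loads the kubeconfig a second time to build that separate client. A minimal sketch of the object itself, assuming the v1beta1 apiextensions types served by a v1.13 apiserver; the group, kind and plural are illustrative:

  package main

  import (
    "fmt"

    apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    crd := &apiextensionsv1beta1.CustomResourceDefinition{
      // The CRD object name must be "<plural>.<group>".
      ObjectMeta: metav1.ObjectMeta{Name: "foos.mygroup.example.com"},
      Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
        Group:   "mygroup.example.com",
        Version: "v1beta1",
        Scope:   apiextensionsv1beta1.NamespaceScoped,
        Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
          Plural:   "foos",
          Singular: "foo",
          Kind:     "Foo",
          ListKind: "FooList",
        },
      },
    }
    // Create/Delete would go through the apiextensions clientset
    // (ApiextensionsV1beta1().CustomResourceDefinitions()).
    fmt.Println(crd.Name)
  }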
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:27:11.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:28:11.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jgw9r" for this suite.
Feb  3 12:28:35.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:28:35.900: INFO: namespace: e2e-tests-container-probe-jgw9r, resource: bindings, ignored listing per whitelist
Feb  3 12:28:36.004: INFO: namespace e2e-tests-container-probe-jgw9r deletion completed in 24.169163712s

• [SLOW TEST:84.382 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
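Editor's note: the spec above hinges on the difference between readiness and liveness. A failing readiness probe only keeps the container out of Ready (and the pod out of service endpoints); it never restarts the container, so restartCount stays at 0 for the whole minute the test observes. A minimal sketch, assuming the core/v1 types of this log's era, where the probe's handler field is still named Handler (newer API versions rename it ProbeHandler); the image, command and timings are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
      Spec: corev1.PodSpec{
        Containers: []corev1.Container{{
          Name:  "test-webserver",
          Image: "busybox",
          ReadinessProbe: &corev1.Probe{
            // Always fails, so the container stays Ready=false forever
            // while its restartCount never moves.
            Handler: corev1.Handler{
              Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
            },
            InitialDelaySeconds: 5,
            PeriodSeconds:       5,
            FailureThreshold:    3,
          },
        }},
      },
    }
    fmt.Println(pod.Spec.Containers[0].ReadinessProbe.PeriodSeconds)
  }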
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:28:36.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  3 12:28:36.190: INFO: Waiting up to 5m0s for pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-g8p87" to be "success or failure"
Feb  3 12:28:36.202: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019114ms
Feb  3 12:28:38.514: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324067291s
Feb  3 12:28:40.565: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374657795s
Feb  3 12:28:42.603: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412513985s
Feb  3 12:28:44.732: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541501272s
Feb  3 12:28:46.905: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.714628258s
STEP: Saw pod success
Feb  3 12:28:46.905: INFO: Pod "downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:28:46.958: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 12:28:47.218: INFO: Waiting for pod downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005 to disappear
Feb  3 12:28:47.234: INFO: Pod downward-api-b2ffdb69-4680-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:28:47.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g8p87" for this suite.
Feb  3 12:28:53.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:28:53.434: INFO: namespace: e2e-tests-downward-api-g8p87, resource: bindings, ignored listing per whitelist
Feb  3 12:28:53.458: INFO: namespace e2e-tests-downward-api-g8p87 deletion completed in 6.212742001s

• [SLOW TEST:17.453 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
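Editor's note: the downward-api pod above gets its own name, namespace and IP injected as environment variables through fieldRef selectors. A minimal sketch, assuming k8s.io/api/core/v1; the env var names, container name and image are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
  )

  func main() {
    fieldEnv := func(name, path string) corev1.EnvVar {
      return corev1.EnvVar{
        Name: name,
        ValueFrom: &corev1.EnvVarSource{
          FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
        },
      }
    }
    c := corev1.Container{
      Name:  "dapi-container",
      Image: "busybox",
      Env: []corev1.EnvVar{
        fieldEnv("POD_NAME", "metadata.name"),
        fieldEnv("POD_NAMESPACE", "metadata.namespace"),
        fieldEnv("POD_IP", "status.podIP"),
      },
    }
    fmt.Println(len(c.Env), "downward API env vars")
  }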
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:28:53.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:28:53.704: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  3 12:28:58.830: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  3 12:29:03.557: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  3 12:29:03.735: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hq2mq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hq2mq/deployments/test-cleanup-deployment,UID:c35de950-4680-11ea-a994-fa163e34d433,ResourceVersion:20417581,Generation:1,CreationTimestamp:2020-02-03 12:29:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  3 12:29:03.817: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-hq2mq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hq2mq/replicasets/test-cleanup-deployment-6df768c57,UID:c37d9a4d-4680-11ea-a994-fa163e34d433,ResourceVersion:20417584,Generation:1,CreationTimestamp:2020-02-03 12:29:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c35de950-4680-11ea-a994-fa163e34d433 0xc0020c6f80 0xc0020c6f81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 12:29:03.817: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  3 12:29:03.818: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hq2mq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hq2mq/replicasets/test-cleanup-controller,UID:bd77091b-4680-11ea-a994-fa163e34d433,ResourceVersion:20417583,Generation:1,CreationTimestamp:2020-02-03 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c35de950-4680-11ea-a994-fa163e34d433 0xc0020c6e4f 0xc0020c6e60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  3 12:29:03.945: INFO: Pod "test-cleanup-controller-qztbr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qztbr,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hq2mq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hq2mq/pods/test-cleanup-controller-qztbr,UID:bd83116b-4680-11ea-a994-fa163e34d433,ResourceVersion:20417579,Generation:0,CreationTimestamp:2020-02-03 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller bd77091b-4680-11ea-a994-fa163e34d433 0xc0021b4257 0xc0021b4258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ffhl8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ffhl8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ffhl8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021b4720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021b4740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:28:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:29:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:29:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-03 12:28:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 12:29:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8da37d811df8a9e20d4e7c027c156a1c73a6afc84032d290b99ae3523a30c898}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:29:03.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hq2mq" for this suite.
Feb  3 12:29:16.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:29:16.419: INFO: namespace: e2e-tests-deployment-hq2mq, resource: bindings, ignored listing per whitelist
Feb  3 12:29:16.435: INFO: namespace e2e-tests-deployment-hq2mq deletion completed in 12.448228417s

• [SLOW TEST:22.977 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
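Editor's note: old ReplicaSets are pruned according to the Deployment's revisionHistoryLimit; the object dump above shows RevisionHistoryLimit:*0, so once the new ReplicaSet is ready the superseded one is garbage-collected, which is the "history to be cleaned up" step. A minimal sketch, assuming k8s.io/api/apps/v1; labels and image mirror the dump but are otherwise illustrative:

  package main

  import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    replicas, history := int32(1), int32(0)
    labels := map[string]string{"name": "cleanup-pod"}
    d := &appsv1.Deployment{
      ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
      Spec: appsv1.DeploymentSpec{
        Replicas: &replicas,
        // 0 keeps no old ReplicaSets around after a rollout.
        RevisionHistoryLimit: &history,
        Selector:             &metav1.LabelSelector{MatchLabels: labels},
        Template: corev1.PodTemplateSpec{
          ObjectMeta: metav1.ObjectMeta{Labels: labels},
          Spec: corev1.PodSpec{Containers: []corev1.Container{{
            Name:  "redis",
            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
          }}},
        },
      },
    }
    fmt.Println(d.Name, *d.Spec.RevisionHistoryLimit)
  }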
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:29:16.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vfm9j.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.94.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.94.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.94.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.94.199_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-vfm9j;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-vfm9j.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-vfm9j.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.94.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.94.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.94.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.94.199_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 12:29:31.219: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.235: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.253: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.266: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.273: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.326: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.334: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.339: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.344: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.348: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.379: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.389: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.406: INFO: Unable to read 10.109.94.199_udp@PTR from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.410: INFO: Unable to read 10.109.94.199_tcp@PTR from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.414: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.418: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.421: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-vfm9j from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.426: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.431: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.436: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.439: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.446: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.450: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.453: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.457: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.460: INFO: Unable to read 10.109.94.199_udp@PTR from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.463: INFO: Unable to read 10.109.94.199_tcp@PTR from pod e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005: the server could not find the requested resource (get pods dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005)
Feb  3 12:29:31.463: INFO: Lookups using e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j wheezy_udp@dns-test-service.e2e-tests-dns-vfm9j.svc wheezy_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.94.199_udp@PTR 10.109.94.199_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-vfm9j jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j jessie_udp@dns-test-service.e2e-tests-dns-vfm9j.svc jessie_tcp@dns-test-service.e2e-tests-dns-vfm9j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-vfm9j.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-vfm9j.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.94.199_udp@PTR 10.109.94.199_tcp@PTR]

Feb  3 12:29:36.841: INFO: DNS probes using e2e-tests-dns-vfm9j/dns-test-cb6dcc2d-4680-11ea-ab15-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:29:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-vfm9j" for this suite.
Feb  3 12:29:45.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:29:45.811: INFO: namespace: e2e-tests-dns-vfm9j, resource: bindings, ignored listing per whitelist
Feb  3 12:29:46.424: INFO: namespace e2e-tests-dns-vfm9j deletion completed in 9.13946742s

• [SLOW TEST:29.989 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
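Editor's note: the dig loops above resolve the test service under <service>.<namespace>.svc (A records over UDP and TCP) plus the _http._tcp SRV records, for both a regular and a headless service. A minimal sketch of a headless service that would produce such records, assuming k8s.io/api/core/v1; the selector and port are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    svc := &corev1.Service{
      ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
      Spec: corev1.ServiceSpec{
        // "None" makes the service headless: DNS returns the pod IPs
        // directly instead of a single cluster IP.
        ClusterIP: corev1.ClusterIPNone,
        Selector:  map[string]string{"dns-test": "true"},
        Ports: []corev1.ServicePort{{
          // A named port backs the _http._tcp.<svc>.<ns>.svc SRV record.
          Name: "http", Port: 80, Protocol: corev1.ProtocolTCP,
        }},
      },
    }
    fmt.Println(svc.Name, svc.Spec.ClusterIP)
  }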
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:29:46.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-dd15afc7-4680-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:29:46.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-mnltt" to be "success or failure"
Feb  3 12:29:46.829: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.851187ms
Feb  3 12:29:48.858: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111458713s
Feb  3 12:29:51.778: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.031724218s
Feb  3 12:29:53.816: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069226306s
Feb  3 12:29:55.840: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.092950393s
STEP: Saw pod success
Feb  3 12:29:55.840: INFO: Pod "pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:29:55.848: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  3 12:29:56.004: INFO: Waiting for pod pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005 to disappear
Feb  3 12:29:56.025: INFO: Pod pod-configmaps-dd18582f-4680-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:29:56.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mnltt" for this suite.
Feb  3 12:30:02.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:30:02.327: INFO: namespace: e2e-tests-configmap-mnltt, resource: bindings, ignored listing per whitelist
Feb  3 12:30:02.371: INFO: namespace e2e-tests-configmap-mnltt deletion completed in 6.237291213s

• [SLOW TEST:15.947 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
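Editor's note: "as non-root" in the spec above means the consuming container runs with a non-zero UID while reading the configMap volume. A minimal sketch, assuming k8s.io/api/core/v1; the UID, image and mount path are illustrative:

  package main

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    uid := int64(1000)
    nonRoot := true
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
      Spec: corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{
          RunAsUser:    &uid,     // arbitrary non-zero UID
          RunAsNonRoot: &nonRoot, // kubelet rejects the pod if it would run as root
        },
        Volumes: []corev1.Volume{{
          Name: "configmap-volume",
          VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
              LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
            },
          },
        }},
        Containers: []corev1.Container{{
          Name:  "configmap-volume-test",
          Image: "busybox",
          VolumeMounts: []corev1.VolumeMount{{
            Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true,
          }},
        }},
      },
    }
    fmt.Println(*pod.Spec.SecurityContext.RunAsUser)
  }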
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:30:02.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-4lc7v
Feb  3 12:30:14.911: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-4lc7v
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 12:30:14.915: INFO: Initial restart count of pod liveness-http is 0
Feb  3 12:30:35.113: INFO: Restart count of pod e2e-tests-container-probe-4lc7v/liveness-http is now 1 (20.198083058s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:30:35.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4lc7v" for this suite.
Feb  3 12:30:41.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:30:41.467: INFO: namespace: e2e-tests-container-probe-4lc7v, resource: bindings, ignored listing per whitelist
Feb  3 12:30:41.515: INFO: namespace e2e-tests-container-probe-4lc7v deletion completed in 6.235456694s

• [SLOW TEST:39.144 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
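The liveness-http pod whose restart is counted above is, in outline, a pod with an httpGet probe against /healthz. The image and timing values below are assumptions, not taken from this run; only the pod name comes from the log.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # assumed image that serves /healthz and later starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      failureThreshold: 1

Once the probe fails, the kubelet restarts the container, which is the restartCount transition from 0 to 1 recorded above.
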
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:30:41.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:30:42.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-k79kx" to be "success or failure"
Feb  3 12:30:42.313: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 212.253271ms
Feb  3 12:30:44.332: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231525996s
Feb  3 12:30:46.355: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254074995s
Feb  3 12:30:48.381: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280174136s
Feb  3 12:30:50.408: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3074837s
Feb  3 12:30:52.420: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.319479073s
STEP: Saw pod success
Feb  3 12:30:52.420: INFO: Pod "downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:30:52.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:30:52.634: INFO: Waiting for pod downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005 to disappear
Feb  3 12:30:52.793: INFO: Pod downwardapi-volume-fe1226af-4680-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:30:52.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k79kx" for this suite.
Feb  3 12:30:58.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:30:59.140: INFO: namespace: e2e-tests-projected-k79kx, resource: bindings, ignored listing per whitelist
Feb  3 12:30:59.146: INFO: namespace e2e-tests-projected-k79kx deletion completed in 6.336763996s

• [SLOW TEST:17.630 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
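The downward API volume test above relies on a projected volume whose resourceFieldRef falls back to node allocatable when the container sets no memory limit. A sketch follows; the image, mount path, and file name are assumptions, while the container name client-container is taken from the log. The cpu-limit variant later in this log works the same way with resource: limits.cpu.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits set here, so limits.memory resolves to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
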
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:30:59.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-lh72t
Feb  3 12:31:09.437: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-lh72t
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 12:31:09.443: INFO: Initial restart count of pod liveness-exec is 0
Feb  3 12:32:01.971: INFO: Restart count of pod e2e-tests-container-probe-lh72t/liveness-exec is now 1 (52.528307068s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:32:02.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lh72t" for this suite.
Feb  3 12:32:10.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:32:10.606: INFO: namespace: e2e-tests-container-probe-lh72t, resource: bindings, ignored listing per whitelist
Feb  3 12:32:10.651: INFO: namespace e2e-tests-container-probe-lh72t deletion completed in 8.527802484s

• [SLOW TEST:71.505 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
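The liveness-exec pod restarted above follows the usual pattern of creating and then removing the file that the exec probe cats. The image, sleep durations, and probe timing below are illustrative assumptions; the pod name and probe command are from the log.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    # the file exists briefly, then disappears, so "cat /tmp/health" starts failing
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
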
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:32:10.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 12:32:10.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-26kvc" to be "success or failure"
Feb  3 12:32:10.920: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.327561ms
Feb  3 12:32:12.941: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043980257s
Feb  3 12:32:14.961: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064187052s
Feb  3 12:32:16.975: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078114102s
Feb  3 12:32:18.999: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101483901s
Feb  3 12:32:21.011: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114209942s
Feb  3 12:32:23.049: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.151687998s
STEP: Saw pod success
Feb  3 12:32:23.049: INFO: Pod "downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:32:23.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 12:32:23.260: INFO: Waiting for pod downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005 to disappear
Feb  3 12:32:23.300: INFO: Pod downwardapi-volume-330261dd-4681-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:32:23.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-26kvc" for this suite.
Feb  3 12:32:29.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:32:29.587: INFO: namespace: e2e-tests-projected-26kvc, resource: bindings, ignored listing per whitelist
Feb  3 12:32:29.716: INFO: namespace e2e-tests-projected-26kvc deletion completed in 6.38825974s

• [SLOW TEST:19.064 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:32:29.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  3 12:32:29.986: INFO: Waiting up to 5m0s for pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005" in namespace "e2e-tests-var-expansion-mx6kj" to be "success or failure"
Feb  3 12:32:30.108: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 122.710151ms
Feb  3 12:32:32.164: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178799019s
Feb  3 12:32:34.198: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21268081s
Feb  3 12:32:36.457: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.471174321s
Feb  3 12:32:38.487: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501006533s
Feb  3 12:32:40.542: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.555889869s
Feb  3 12:32:42.830: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.844256668s
STEP: Saw pod success
Feb  3 12:32:42.830: INFO: Pod "var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:32:42.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 12:32:42.912: INFO: Waiting for pod var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005 to disappear
Feb  3 12:32:43.001: INFO: Pod var-expansion-3e6472f3-4681-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:32:43.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mx6kj" for this suite.
Feb  3 12:32:49.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:32:49.248: INFO: namespace: e2e-tests-var-expansion-mx6kj, resource: bindings, ignored listing per whitelist
Feb  3 12:32:49.254: INFO: namespace e2e-tests-var-expansion-mx6kj deletion completed in 6.230359989s

• [SLOW TEST:19.537 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
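The variable-expansion test above boils down to a pod whose command references an environment variable with the $(VAR) syntax, which the kubelet substitutes before starting the container. The variable name, value, and image below are assumptions; only the container name dapi-container comes from the log.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "test-message"
    # $(MESSAGE) is expanded to "test-message" by the kubelet before the container runs
    command: ["/bin/echo", "$(MESSAGE)"]
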
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:32:49.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 12:32:49.488: INFO: Waiting up to 5m0s for pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-h7w75" to be "success or failure"
Feb  3 12:32:49.504: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.391789ms
Feb  3 12:32:51.519: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030321488s
Feb  3 12:32:53.545: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056939111s
Feb  3 12:32:55.802: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313588632s
Feb  3 12:32:57.825: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336219448s
Feb  3 12:33:00.373: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.884774626s
STEP: Saw pod success
Feb  3 12:33:00.373: INFO: Pod "pod-49f49fc4-4681-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:33:00.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-49f49fc4-4681-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:33:00.795: INFO: Waiting for pod pod-49f49fc4-4681-11ea-ab15-0242ac110005 to disappear
Feb  3 12:33:00.813: INFO: Pod pod-49f49fc4-4681-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:33:00.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h7w75" for this suite.
Feb  3 12:33:06.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:33:07.021: INFO: namespace: e2e-tests-emptydir-h7w75, resource: bindings, ignored listing per whitelist
Feb  3 12:33:07.029: INFO: namespace e2e-tests-emptydir-h7w75 deletion completed in 6.192966095s

• [SLOW TEST:17.774 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
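The emptyDir test above mounts a volume on the default medium (node disk) and checks that the mount point is world-writable (0777) when accessed as root. The image and paths in this sketch are assumptions; only the container name test-container echoes the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]   # expects drwxrwxrwx owned by root
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}          # default medium; the tmpfs variants later in the log set medium: Memory
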
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:33:07.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 12:33:07.359: INFO: Number of nodes with available pods: 0
Feb  3 12:33:07.359: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:08.383: INFO: Number of nodes with available pods: 0
Feb  3 12:33:08.383: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:09.395: INFO: Number of nodes with available pods: 0
Feb  3 12:33:09.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:10.396: INFO: Number of nodes with available pods: 0
Feb  3 12:33:10.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:11.433: INFO: Number of nodes with available pods: 0
Feb  3 12:33:11.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:12.384: INFO: Number of nodes with available pods: 0
Feb  3 12:33:12.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:14.326: INFO: Number of nodes with available pods: 0
Feb  3 12:33:14.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:14.379: INFO: Number of nodes with available pods: 0
Feb  3 12:33:14.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:15.652: INFO: Number of nodes with available pods: 0
Feb  3 12:33:15.652: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:16.413: INFO: Number of nodes with available pods: 0
Feb  3 12:33:16.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:17.392: INFO: Number of nodes with available pods: 0
Feb  3 12:33:17.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:18.390: INFO: Number of nodes with available pods: 1
Feb  3 12:33:18.390: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  3 12:33:18.447: INFO: Number of nodes with available pods: 0
Feb  3 12:33:18.447: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:19.479: INFO: Number of nodes with available pods: 0
Feb  3 12:33:19.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:20.475: INFO: Number of nodes with available pods: 0
Feb  3 12:33:20.475: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:21.515: INFO: Number of nodes with available pods: 0
Feb  3 12:33:21.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:22.961: INFO: Number of nodes with available pods: 0
Feb  3 12:33:22.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:23.473: INFO: Number of nodes with available pods: 0
Feb  3 12:33:23.473: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:24.489: INFO: Number of nodes with available pods: 0
Feb  3 12:33:24.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:25.465: INFO: Number of nodes with available pods: 0
Feb  3 12:33:25.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:26.558: INFO: Number of nodes with available pods: 0
Feb  3 12:33:26.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:27.475: INFO: Number of nodes with available pods: 0
Feb  3 12:33:27.475: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:28.495: INFO: Number of nodes with available pods: 0
Feb  3 12:33:28.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:29.476: INFO: Number of nodes with available pods: 0
Feb  3 12:33:29.476: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:30.522: INFO: Number of nodes with available pods: 0
Feb  3 12:33:30.522: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:31.470: INFO: Number of nodes with available pods: 0
Feb  3 12:33:31.470: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:32.519: INFO: Number of nodes with available pods: 0
Feb  3 12:33:32.519: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:33.470: INFO: Number of nodes with available pods: 0
Feb  3 12:33:33.470: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:34.906: INFO: Number of nodes with available pods: 0
Feb  3 12:33:34.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:35.940: INFO: Number of nodes with available pods: 0
Feb  3 12:33:35.940: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:36.497: INFO: Number of nodes with available pods: 0
Feb  3 12:33:36.498: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:37.507: INFO: Number of nodes with available pods: 0
Feb  3 12:33:37.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:38.489: INFO: Number of nodes with available pods: 0
Feb  3 12:33:38.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:39.590: INFO: Number of nodes with available pods: 0
Feb  3 12:33:39.590: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:42.102: INFO: Number of nodes with available pods: 0
Feb  3 12:33:42.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:42.961: INFO: Number of nodes with available pods: 0
Feb  3 12:33:42.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:43.466: INFO: Number of nodes with available pods: 0
Feb  3 12:33:43.466: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:44.473: INFO: Number of nodes with available pods: 0
Feb  3 12:33:44.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:45.474: INFO: Number of nodes with available pods: 0
Feb  3 12:33:45.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:33:46.523: INFO: Number of nodes with available pods: 1
Feb  3 12:33:46.523: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-twtgg, will wait for the garbage collector to delete the pods
Feb  3 12:33:46.675: INFO: Deleting DaemonSet.extensions daemon-set took: 48.377891ms
Feb  3 12:33:46.976: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.544614ms
Feb  3 12:34:02.709: INFO: Number of nodes with available pods: 0
Feb  3 12:34:02.709: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 12:34:02.714: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-twtgg/daemonsets","resourceVersion":"20418235"},"items":null}

Feb  3 12:34:02.720: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-twtgg/pods","resourceVersion":"20418235"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:34:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-twtgg" for this suite.
Feb  3 12:34:08.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:34:08.964: INFO: namespace: e2e-tests-daemonsets-twtgg, resource: bindings, ignored listing per whitelist
Feb  3 12:34:09.012: INFO: namespace e2e-tests-daemonsets-twtgg deletion completed in 6.264075341s

• [SLOW TEST:61.983 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
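The DaemonSet "daemon-set" created, killed, and revived above is, in outline, a minimal one-container DaemonSet; the labels, image, and port below are illustrative assumptions.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed placeholder image
        ports:
        - containerPort: 80

With a single schedulable node, the check is satisfied as soon as one pod is Available, which is why the polling above stops at "Number of running nodes: 1, number of available pods: 1".
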
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:34:09.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:34:09.235: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  3 12:34:09.243: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-56drg/daemonsets","resourceVersion":"20418267"},"items":null}

Feb  3 12:34:09.246: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-56drg/pods","resourceVersion":"20418267"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:34:09.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-56drg" for this suite.
Feb  3 12:34:15.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:34:15.428: INFO: namespace: e2e-tests-daemonsets-56drg, resource: bindings, ignored listing per whitelist
Feb  3 12:34:15.453: INFO: namespace e2e-tests-daemonsets-56drg deletion completed in 6.196213602s

S [SKIPPING] [6.441 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  3 12:34:09.235: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:34:15.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:34:25.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-p82kn" for this suite.
Feb  3 12:35:07.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:35:07.977: INFO: namespace: e2e-tests-kubelet-test-p82kn, resource: bindings, ignored listing per whitelist
Feb  3 12:35:08.207: INFO: namespace e2e-tests-kubelet-test-p82kn deletion completed in 42.346638555s

• [SLOW TEST:52.753 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
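The hostAliases test above verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts. A sketch follows, with assumed IPs, hostnames, and image.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # the kubelet appends the aliases above to this container's /etc/hosts
    command: ["sh", "-c", "cat /etc/hosts && sleep 600"]
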
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:35:08.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:35:08.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:35:18.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8p45p" for this suite.
Feb  3 12:36:02.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:36:02.791: INFO: namespace: e2e-tests-pods-8p45p, resource: bindings, ignored listing per whitelist
Feb  3 12:36:02.844: INFO: namespace e2e-tests-pods-8p45p deletion completed in 44.167596821s

• [SLOW TEST:54.636 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:36:02.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  3 12:36:02.982: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  3 12:36:02.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:05.309: INFO: stderr: ""
Feb  3 12:36:05.309: INFO: stdout: "service/redis-slave created\n"
Feb  3 12:36:05.310: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  3 12:36:05.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:05.781: INFO: stderr: ""
Feb  3 12:36:05.782: INFO: stdout: "service/redis-master created\n"
Feb  3 12:36:05.783: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  3 12:36:05.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:06.303: INFO: stderr: ""
Feb  3 12:36:06.303: INFO: stdout: "service/frontend created\n"
Feb  3 12:36:06.304: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  3 12:36:06.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:06.975: INFO: stderr: ""
Feb  3 12:36:06.976: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  3 12:36:06.977: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 12:36:06.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:07.583: INFO: stderr: ""
Feb  3 12:36:07.583: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  3 12:36:07.584: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  3 12:36:07.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:08.097: INFO: stderr: ""
Feb  3 12:36:08.097: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  3 12:36:08.097: INFO: Waiting for all frontend pods to be Running.
Feb  3 12:36:38.150: INFO: Waiting for frontend to serve content.
Feb  3 12:36:38.204: INFO: Trying to add a new entry to the guestbook.
Feb  3 12:36:38.235: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  3 12:36:38.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:38.707: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:38.707: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 12:36:38.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:38.959: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:38.960: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 12:36:38.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:39.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:39.170: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 12:36:39.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:39.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:39.321: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 12:36:39.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:39.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:39.502: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 12:36:39.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6p5rg'
Feb  3 12:36:39.895: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 12:36:39.895: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:36:39.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6p5rg" for this suite.
Feb  3 12:37:24.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:37:24.156: INFO: namespace: e2e-tests-kubectl-6p5rg, resource: bindings, ignored listing per whitelist
Feb  3 12:37:24.182: INFO: namespace e2e-tests-kubectl-6p5rg deletion completed in 44.186793548s

• [SLOW TEST:81.336 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:37:24.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-eddeb441-4681-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:37:24.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-89l2x" to be "success or failure"
Feb  3 12:37:24.404: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.663516ms
Feb  3 12:37:26.421: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031078362s
Feb  3 12:37:28.443: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053323702s
Feb  3 12:37:30.608: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217880989s
Feb  3 12:37:32.656: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265814475s
Feb  3 12:37:34.688: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.298161465s
STEP: Saw pod success
Feb  3 12:37:34.688: INFO: Pod "pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:37:34.748: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  3 12:37:34.964: INFO: Waiting for pod pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005 to disappear
Feb  3 12:37:34.985: INFO: Pod pod-configmaps-eddf8e6b-4681-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:37:34.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-89l2x" for this suite.
Feb  3 12:37:41.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:37:41.114: INFO: namespace: e2e-tests-configmap-89l2x, resource: bindings, ignored listing per whitelist
Feb  3 12:37:41.229: INFO: namespace e2e-tests-configmap-89l2x deletion completed in 6.22917624s

• [SLOW TEST:17.047 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
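The defaultMode variant above differs from the earlier ConfigMap volume test only in pinning the file mode on the ConfigMap volume source. The sketch below assumes a mode of 0400 plus an illustrative image and mount path.

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-defaultmode-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]   # the test checks the reported file mode
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400     # assumed value; anything other than the 0644 default demonstrates the field
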
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:37:41.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  3 12:37:41.523: INFO: PodSpec: initContainers in spec.initContainers
Feb  3 12:38:55.967: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f8176745-4681-11ea-ab15-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-8z6b9", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-8z6b9/pods/pod-init-f8176745-4681-11ea-ab15-0242ac110005", UID:"f8284335-4681-11ea-a994-fa163e34d433", ResourceVersion:"20418896", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716330261, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"523056692"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8srqq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00262a000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8srqq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8srqq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8srqq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002592088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00253a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002592100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002592120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002592128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00259212c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716330262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716330262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716330262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716330261, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002948040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00019e1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00019e5b0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://bf3ede83d1c89799a73326d48c7fc8a97a7a7080dca6909028ac4c818753acba"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002948080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002948060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:38:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-8z6b9" for this suite.
Feb  3 12:39:20.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:39:20.201: INFO: namespace: e2e-tests-init-container-8z6b9, resource: bindings, ignored listing per whitelist
Feb  3 12:39:20.212: INFO: namespace e2e-tests-init-container-8z6b9 deletion completed in 24.224530061s

• [SLOW TEST:98.982 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
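
The test above checks that regular containers never start while an init container keeps failing on a pod with restartPolicy: Always; the dumped Pod object shows init1 with RestartCount:3 and run1 still Waiting. As a rough illustration only (not the e2e source), here is a minimal Go sketch of a pod with that shape; the failing command is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: RestartPolicy=Always, an init container that always exits
	// non-zero, and an app container that should therefore never leave the
	// Waiting state. Images match the log; the commands are assumptions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-failure"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}}, // keeps failing, keeps restarting
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},  // never reached
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"}, // stays Pending/Waiting
			},
		},
	}
	fmt.Printf("%+v\n", pod)
}
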
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:39:20.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-33327859-4682-11ea-ab15-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-33327939-4682-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-33327859-4682-11ea-ab15-0242ac110005
STEP: Updating configmap cm-test-opt-upd-33327939-4682-11ea-ab15-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-3332797a-4682-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:40:52.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7jthl" for this suite.
Feb  3 12:41:18.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:41:18.643: INFO: namespace: e2e-tests-configmap-7jthl, resource: bindings, ignored listing per whitelist
Feb  3 12:41:18.755: INFO: namespace e2e-tests-configmap-7jthl deletion completed in 26.295329852s

• [SLOW TEST:118.543 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
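
The steps above create optional configMap volumes, then delete one configMap, update another and create a third, and wait for the kubelet to reflect the changes in the mounted files. A simplified sketch of that volume wiring (two of the three configMaps, shortened names, assumed mount paths), not the actual test code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Because the configMap volumes are optional, the pod can start and keep
	// running even when one configMap is deleted or does not exist yet; the
	// kubelet later syncs updates and late creations into the mounted files.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "cm-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-del", MountPath: "/etc/cm-del"},
					{Name: "cm-create", MountPath: "/etc/cm-create"},
				},
			}},
			Volumes: []corev1.Volume{
				{
					Name: "cm-del",
					VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             boolPtr(true), // deleting the configMap must not break the pod
					}},
				},
				{
					Name: "cm-create",
					VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             boolPtr(true), // may be created only after the pod starts
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", pod)
}
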
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:41:18.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  3 12:41:18.969: INFO: Waiting up to 5m0s for pod "pod-79b095ff-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-emptydir-c5zqz" to be "success or failure"
Feb  3 12:41:18.986: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.175626ms
Feb  3 12:41:21.207: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237330251s
Feb  3 12:41:23.227: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257858082s
Feb  3 12:41:25.628: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659139519s
Feb  3 12:41:27.888: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918284901s
Feb  3 12:41:29.906: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.936538712s
STEP: Saw pod success
Feb  3 12:41:29.906: INFO: Pod "pod-79b095ff-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:41:29.925: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-79b095ff-4682-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:41:30.212: INFO: Waiting for pod pod-79b095ff-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:41:30.222: INFO: Pod pod-79b095ff-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:41:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c5zqz" for this suite.
Feb  3 12:41:36.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:41:36.317: INFO: namespace: e2e-tests-emptydir-c5zqz, resource: bindings, ignored listing per whitelist
Feb  3 12:41:36.415: INFO: namespace e2e-tests-emptydir-c5zqz deletion completed in 6.181498887s

• [SLOW TEST:17.660 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
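
The (root,0666,tmpfs) case mounts a memory-backed emptyDir and verifies that a file created there with mode 0666 keeps that mode. A minimal sketch of such a pod; the shell command stands in for the mounttest image the suite actually uses and is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// medium=Memory makes the emptyDir a tmpfs mount; the container creates a
	// file, sets mode 0666 and prints the resulting permissions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
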
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:41:36.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-843e6ce2-4682-11ea-ab15-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-843e6cc2-4682-11ea-ab15-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  3 12:41:36.680: INFO: Waiting up to 5m0s for pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-nkkdw" to be "success or failure"
Feb  3 12:41:36.696: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.677518ms
Feb  3 12:41:38.718: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038524727s
Feb  3 12:41:40.781: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100882006s
Feb  3 12:41:43.348: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667851576s
Feb  3 12:41:45.414: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73373929s
Feb  3 12:41:47.434: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.753778261s
STEP: Saw pod success
Feb  3 12:41:47.434: INFO: Pod "projected-volume-843e6c15-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:41:47.447: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-843e6c15-4682-11ea-ab15-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb  3 12:41:47.542: INFO: Waiting for pod projected-volume-843e6c15-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:41:47.549: INFO: Pod projected-volume-843e6c15-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:41:47.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nkkdw" for this suite.
Feb  3 12:41:55.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:41:55.613: INFO: namespace: e2e-tests-projected-nkkdw, resource: bindings, ignored listing per whitelist
Feb  3 12:41:55.737: INFO: namespace e2e-tests-projected-nkkdw deletion completed in 8.181056326s

• [SLOW TEST:19.322 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
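
The projection test mounts a configMap, a secret and downward API fields through a single projected volume. A sketch of that volume source with assumed key and path names, not the test's own code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume combining the three source types the test checks.
	vol := corev1.Volume{
		Name: "projected-all-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
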
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:41:55.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8fc369b8-4682-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:41:56.135: INFO: Waiting up to 5m0s for pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-nsq4j" to be "success or failure"
Feb  3 12:41:56.148: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.557246ms
Feb  3 12:41:58.652: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516762279s
Feb  3 12:42:00.684: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548595101s
Feb  3 12:42:02.724: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588491464s
Feb  3 12:42:04.753: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617283824s
Feb  3 12:42:06.782: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646046898s
Feb  3 12:42:08.810: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.674195933s
STEP: Saw pod success
Feb  3 12:42:08.810: INFO: Pod "pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:42:08.818: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 12:42:08.988: INFO: Waiting for pod pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:42:08.998: INFO: Pod pod-secrets-8fd7259a-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:42:08.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nsq4j" for this suite.
Feb  3 12:42:15.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:42:15.244: INFO: namespace: e2e-tests-secrets-nsq4j, resource: bindings, ignored listing per whitelist
Feb  3 12:42:15.320: INFO: namespace e2e-tests-secrets-nsq4j deletion completed in 6.315586748s

• [SLOW TEST:19.583 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
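
This case reads a secret volume as a non-root user, with an explicit defaultMode on the volume and fsGroup set on the pod so the files end up group-readable for that user. A sketch with assumed mode and IDs (the log records only the test name and timings):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-defaultmode-fsgroup"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1001), // secret files are group-owned by this GID
			},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "stat -c %a /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test",
					DefaultMode: int32Ptr(0440), // owner and group read only
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
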
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:42:15.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9b6a0641-4682-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:42:15.658: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-7pgh4" to be "success or failure"
Feb  3 12:42:15.672: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.349485ms
Feb  3 12:42:17.684: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02519447s
Feb  3 12:42:19.705: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045850611s
Feb  3 12:42:21.899: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240106826s
Feb  3 12:42:23.940: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28084905s
Feb  3 12:42:25.962: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303310624s
STEP: Saw pod success
Feb  3 12:42:25.962: INFO: Pod "pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:42:25.974: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 12:42:26.746: INFO: Waiting for pod pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:42:27.042: INFO: Pod pod-projected-configmaps-9b6c562f-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:42:27.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7pgh4" for this suite.
Feb  3 12:42:33.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:42:34.082: INFO: namespace: e2e-tests-projected-7pgh4, resource: bindings, ignored listing per whitelist
Feb  3 12:42:34.116: INFO: namespace e2e-tests-projected-7pgh4 deletion completed in 7.027722263s

• [SLOW TEST:18.795 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:42:34.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  3 12:42:34.476: INFO: Waiting up to 5m0s for pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-var-expansion-pb8f4" to be "success or failure"
Feb  3 12:42:34.530: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.704522ms
Feb  3 12:42:36.571: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094576675s
Feb  3 12:42:38.595: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119035663s
Feb  3 12:42:40.781: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304881674s
Feb  3 12:42:42.790: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314107923s
Feb  3 12:42:44.808: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.331274513s
STEP: Saw pod success
Feb  3 12:42:44.808: INFO: Pod "var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:42:44.814: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  3 12:42:45.107: INFO: Waiting for pod var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:42:45.133: INFO: Pod var-expansion-a6a9c088-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:42:45.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pb8f4" for this suite.
Feb  3 12:42:51.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:42:51.207: INFO: namespace: e2e-tests-var-expansion-pb8f4, resource: bindings, ignored listing per whitelist
Feb  3 12:42:51.327: INFO: namespace e2e-tests-var-expansion-pb8f4 deletion completed in 6.183688151s

• [SLOW TEST:17.210 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
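
Argument substitution here means that $(VAR) references in a container's args are expanded from the container's environment before the command runs. A minimal sketch; the variable name and value are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-args"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				Command: []string{"sh", "-c"},
				// Expanded by Kubernetes to: echo test-value
				Args: []string{"echo $(TEST_VAR)"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
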
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:42:51.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-mnw6
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 12:42:51.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mnw6" in namespace "e2e-tests-subpath-d6sw6" to be "success or failure"
Feb  3 12:42:51.662: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 88.843974ms
Feb  3 12:42:53.793: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219131029s
Feb  3 12:42:55.801: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22792854s
Feb  3 12:42:57.825: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25181653s
Feb  3 12:42:59.848: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273968932s
Feb  3 12:43:01.872: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298701034s
Feb  3 12:43:03.890: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.316412328s
Feb  3 12:43:05.941: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.367683353s
Feb  3 12:43:07.954: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 16.380039961s
Feb  3 12:43:09.968: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 18.39481503s
Feb  3 12:43:11.986: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 20.412656284s
Feb  3 12:43:14.038: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 22.464216954s
Feb  3 12:43:16.055: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 24.480948616s
Feb  3 12:43:18.079: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 26.505477393s
Feb  3 12:43:20.101: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 28.527440656s
Feb  3 12:43:22.117: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 30.543735842s
Feb  3 12:43:24.422: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Running", Reason="", readiness=false. Elapsed: 32.848588866s
Feb  3 12:43:26.437: INFO: Pod "pod-subpath-test-projected-mnw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.863283218s
STEP: Saw pod success
Feb  3 12:43:26.437: INFO: Pod "pod-subpath-test-projected-mnw6" satisfied condition "success or failure"
Feb  3 12:43:26.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-mnw6 container test-container-subpath-projected-mnw6: 
STEP: delete the pod
Feb  3 12:43:28.117: INFO: Waiting for pod pod-subpath-test-projected-mnw6 to disappear
Feb  3 12:43:28.145: INFO: Pod pod-subpath-test-projected-mnw6 no longer exists
STEP: Deleting pod pod-subpath-test-projected-mnw6
Feb  3 12:43:28.146: INFO: Deleting pod "pod-subpath-test-projected-mnw6" in namespace "e2e-tests-subpath-d6sw6"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:43:28.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-d6sw6" for this suite.
Feb  3 12:43:36.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:43:36.424: INFO: namespace: e2e-tests-subpath-d6sw6, resource: bindings, ignored listing per whitelist
Feb  3 12:43:36.570: INFO: namespace e2e-tests-subpath-d6sw6 deletion completed in 8.295289412s

• [SLOW TEST:45.243 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
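
The atomic-writer subpath case mounts a projected volume both as a whole and via subPath, so a single file from the volume appears at its own mount path, and the container reads it back while the pod runs. A sketch with assumed names and a simplified read command, not the e2e source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-subpath"},
					},
				}},
			},
		},
	}
	container := corev1.Container{
		Name:  "test-container-subpath",
		Image: "busybox:1.29",
		// The real test image re-reads the file for a while; this just waits and reads once.
		Command: []string{"sh", "-c", "sleep 30; cat /test-volume-subpath"},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "test-volume", MountPath: "/test-volume"},                                   // whole projected volume
			{Name: "test-volume", MountPath: "/test-volume-subpath", SubPath: "configmap-key"}, // one file via subPath
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers:    []corev1.Container{container},
			Volumes:       []corev1.Volume{vol},
		},
	}
	fmt.Printf("%+v\n", pod)
}
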
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:43:36.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  3 12:43:44.065: INFO: 10 pods remaining
Feb  3 12:43:44.065: INFO: 8 pods have nil DeletionTimestamp

Feb  3 12:43:44.065: INFO: 
Feb  3 12:43:44.739: INFO: 0 pods remaining
Feb  3 12:43:44.739: INFO: 0 pods have nil DeletionTimestamp
Feb  3 12:43:44.739: INFO: 
STEP: Gathering metrics
W0203 12:43:45.744735       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 12:43:45.744: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:43:45.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s5cnj" for this suite.
Feb  3 12:43:59.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:43:59.846: INFO: namespace: e2e-tests-gc-s5cnj, resource: bindings, ignored listing per whitelist
Feb  3 12:44:00.005: INFO: namespace e2e-tests-gc-s5cnj deletion completed in 14.251435392s

• [SLOW TEST:23.435 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
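
"Keep the rc around until all its pods are deleted" is the foreground-deletion behaviour of the garbage collector: the owner keeps its deletion timestamp and the foregroundDeletion finalizer until its dependents are gone. A minimal client-go sketch of issuing such a delete, assuming recent (v0.18+) client-go signatures and placeholder names rather than the generated ones in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the RC object is only removed after the garbage
	// collector has deleted all of its pods.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("demo-ns").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("RC deletion requested; it disappears only after its pods are gone")
}
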
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:44:00.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  3 12:44:00.218: INFO: Waiting up to 5m0s for pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-containers-cl8z8" to be "success or failure"
Feb  3 12:44:00.238: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.826788ms
Feb  3 12:44:02.250: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031804786s
Feb  3 12:44:04.270: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051675411s
Feb  3 12:44:06.512: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293720273s
Feb  3 12:44:09.163: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944795222s
Feb  3 12:44:11.179: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.960476167s
STEP: Saw pod success
Feb  3 12:44:11.179: INFO: Pod "client-containers-d9cc4112-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:44:11.184: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d9cc4112-4682-11ea-ab15-0242ac110005 container test-container: 
STEP: delete the pod
Feb  3 12:44:11.458: INFO: Waiting for pod client-containers-d9cc4112-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:44:11.464: INFO: Pod client-containers-d9cc4112-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:44:11.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-cl8z8" for this suite.
Feb  3 12:44:17.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:44:17.677: INFO: namespace: e2e-tests-containers-cl8z8, resource: bindings, ignored listing per whitelist
Feb  3 12:44:17.679: INFO: namespace e2e-tests-containers-cl8z8 deletion completed in 6.208717128s

• [SLOW TEST:17.673 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
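
Overriding an image's default command and arguments is done with the container's command field (replaces the image ENTRYPOINT) and args field (replaces the image CMD). A minimal sketch with assumed strings:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/echo"},             // overrides ENTRYPOINT
				Args:    []string{"override", "arguments"}, // overrides CMD
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
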
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:44:17.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-glk6z/secret-test-e45aba49-4682-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 12:44:17.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-glk6z" to be "success or failure"
Feb  3 12:44:18.006: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.500096ms
Feb  3 12:44:20.018: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031207371s
Feb  3 12:44:22.051: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063939441s
Feb  3 12:44:24.361: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373779786s
Feb  3 12:44:26.437: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449854225s
Feb  3 12:44:28.454: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.46719076s
Feb  3 12:44:30.481: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.493944945s
STEP: Saw pod success
Feb  3 12:44:30.481: INFO: Pod "pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:44:30.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005 container env-test: 
STEP: delete the pod
Feb  3 12:44:30.864: INFO: Waiting for pod pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005 to disappear
Feb  3 12:44:30.930: INFO: Pod pod-configmaps-e45baca9-4682-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:44:30.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-glk6z" for this suite.
Feb  3 12:44:37.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:44:37.388: INFO: namespace: e2e-tests-secrets-glk6z, resource: bindings, ignored listing per whitelist
Feb  3 12:44:37.455: INFO: namespace e2e-tests-secrets-glk6z deletion completed in 6.323898829s

• [SLOW TEST:19.776 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
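
Here a secret is consumed through the environment rather than a volume: a single key is injected into the container as an environment variable via secretKeyRef. A sketch with placeholder secret and key names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
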
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:44:37.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 12:44:37.787: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f02a2cdc-4682-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001e1bbfa), BlockOwnerDeletion:(*bool)(0xc001e1bbfb)}}
Feb  3 12:44:37.954: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f02525ff-4682-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001e1be12), BlockOwnerDeletion:(*bool)(0xc001e1be13)}}
Feb  3 12:44:37.989: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f0288a0c-4682-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00284e502), BlockOwnerDeletion:(*bool)(0xc00284e503)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:44:43.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fx4gp" for this suite.
Feb  3 12:44:51.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:44:51.288: INFO: namespace: e2e-tests-gc-fx4gp, resource: bindings, ignored listing per whitelist
Feb  3 12:44:51.400: INFO: namespace e2e-tests-gc-fx4gp deletion completed in 8.212487091s

• [SLOW TEST:13.944 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
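
The ownerReferences printed above form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2); the test expects the garbage collector to still clean the pods up rather than deadlock on the circle once one of them is deleted. A sketch of how such an owner reference is attached, with placeholder UIDs (in the real test they come from the created objects):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

// ownedPod builds a pod whose sole owner reference points at another pod,
// mirroring the shape of the references dumped in the log.
func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         boolPtr(true),
				BlockOwnerDeletion: boolPtr(true),
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
}

func main() {
	// pod1 owned by pod3; pod2 owned by pod1 and pod3 owned by pod2 would be
	// built the same way to close the circle.
	fmt.Printf("%+v\n", ownedPod("pod1", "pod3", types.UID("uid-of-pod3")))
}
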
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:44:51.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-94gws
Feb  3 12:45:01.720: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-94gws
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 12:45:01.725: INFO: Initial restart count of pod liveness-http is 0
Feb  3 12:45:22.674: INFO: Restart count of pod e2e-tests-container-probe-94gws/liveness-http is now 1 (20.94874355s elapsed)
Feb  3 12:45:43.019: INFO: Restart count of pod e2e-tests-container-probe-94gws/liveness-http is now 2 (41.293319929s elapsed)
Feb  3 12:46:03.346: INFO: Restart count of pod e2e-tests-container-probe-94gws/liveness-http is now 3 (1m1.620309547s elapsed)
Feb  3 12:46:21.496: INFO: Restart count of pod e2e-tests-container-probe-94gws/liveness-http is now 4 (1m19.770711315s elapsed)
Feb  3 12:47:22.364: INFO: Restart count of pod e2e-tests-container-probe-94gws/liveness-http is now 5 (2m20.638665747s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:47:22.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-94gws" for this suite.
Feb  3 12:47:28.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:47:28.640: INFO: namespace: e2e-tests-container-probe-94gws, resource: bindings, ignored listing per whitelist
Feb  3 12:47:28.719: INFO: namespace e2e-tests-container-probe-94gws deletion completed in 6.298438367s

• [SLOW TEST:157.319 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
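
The monotonically increasing restart counts above come from an HTTP liveness probe that keeps failing: the kubelet repeatedly kills and restarts the container, and status.containerStatuses[].restartCount only ever grows. A sketch of a liveness-http style pod; the image, path, port and thresholds are assumptions, not the suite's own test image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	// HTTPGet is a promoted field of the probe's embedded handler struct, so
	// this assignment compiles across client library versions.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "example.invalid/liveness-server:latest", // placeholder; the e2e suite uses its own test image
				LivenessProbe: probe,
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
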
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:47:28.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0203 12:47:39.160110       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 12:47:39.160: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:47:39.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-p26ws" for this suite.
Feb  3 12:47:45.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:47:45.276: INFO: namespace: e2e-tests-gc-p26ws, resource: bindings, ignored listing per whitelist
Feb  3 12:47:45.365: INFO: namespace e2e-tests-gc-p26ws deletion completed in 6.197819358s

• [SLOW TEST:16.645 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
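
This is the non-orphaning counterpart of the foreground example earlier: with background propagation the rc object is removed immediately and the garbage collector deletes its pods afterwards. A short client-go sketch under the same assumptions (recent v0.18+ signatures, placeholder names):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: dependents are not orphaned; the GC removes the
	// pods after the RC itself is gone.
	policy := metav1.DeletePropagationBackground
	if err := client.CoreV1().ReplicationControllers("demo-ns").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
	fmt.Println("RC removed; its pods will be garbage collected in the background")
}
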
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:47:45.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  3 12:47:45.629: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420020,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 12:47:45.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420021,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  3 12:47:45.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420022,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  3 12:47:55.695: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420035,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 12:47:55.695: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420036,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  3 12:47:55.695: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mpmdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-mpmdj/configmaps/e2e-watch-test-label-changed,UID:60230b4e-4683-11ea-a994-fa163e34d433,ResourceVersion:20420037,Generation:0,CreationTimestamp:2020-02-03 12:47:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:47:55.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-mpmdj" for this suite.
Feb  3 12:48:01.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:48:01.897: INFO: namespace: e2e-tests-watch-mpmdj, resource: bindings, ignored listing per whitelist
Feb  3 12:48:01.952: INFO: namespace e2e-tests-watch-mpmdj deletion completed in 6.248152277s

• [SLOW TEST:16.587 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
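
The watch test filters configmaps by a label selector; changing the label so the object stops matching surfaces as a DELETED event on the watch, and restoring it surfaces as ADDED, exactly as logged above. A client-go sketch of such a watch, assuming recent (v0.18+) signatures; namespace and selector value are taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := client.CoreV1().ConfigMaps("e2e-tests-watch-mpmdj").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=label-changed-and-restored"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Events arrive only for objects that currently match the selector.
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", event.Type, event.Object)
	}
}
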
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:48:01.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-69fdbe16-4683-11ea-ab15-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-69fdbef7-4683-11ea-ab15-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-69fdbe16-4683-11ea-ab15-0242ac110005
STEP: Updating configmap cm-test-opt-upd-69fdbef7-4683-11ea-ab15-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-69fdbf27-4683-11ea-ab15-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:49:45.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mffnk" for this suite.
Feb  3 12:50:09.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:50:09.340: INFO: namespace: e2e-tests-projected-mffnk, resource: bindings, ignored listing per whitelist
Feb  3 12:50:09.407: INFO: namespace e2e-tests-projected-mffnk deletion completed in 24.3329162s

• [SLOW TEST:127.454 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:50:09.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 12:50:09.747: INFO: Number of nodes with available pods: 0
Feb  3 12:50:09.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:10.766: INFO: Number of nodes with available pods: 0
Feb  3 12:50:10.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:11.775: INFO: Number of nodes with available pods: 0
Feb  3 12:50:11.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:12.765: INFO: Number of nodes with available pods: 0
Feb  3 12:50:12.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:13.805: INFO: Number of nodes with available pods: 0
Feb  3 12:50:13.805: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:15.046: INFO: Number of nodes with available pods: 0
Feb  3 12:50:15.046: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:16.366: INFO: Number of nodes with available pods: 0
Feb  3 12:50:16.366: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:17.000: INFO: Number of nodes with available pods: 0
Feb  3 12:50:17.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:17.764: INFO: Number of nodes with available pods: 0
Feb  3 12:50:17.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:18.846: INFO: Number of nodes with available pods: 0
Feb  3 12:50:18.846: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:19.928: INFO: Number of nodes with available pods: 1
Feb  3 12:50:19.928: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  3 12:50:20.019: INFO: Number of nodes with available pods: 0
Feb  3 12:50:20.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:21.073: INFO: Number of nodes with available pods: 0
Feb  3 12:50:21.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:22.130: INFO: Number of nodes with available pods: 0
Feb  3 12:50:22.130: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:23.254: INFO: Number of nodes with available pods: 0
Feb  3 12:50:23.254: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:24.064: INFO: Number of nodes with available pods: 0
Feb  3 12:50:24.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:25.126: INFO: Number of nodes with available pods: 0
Feb  3 12:50:25.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:26.062: INFO: Number of nodes with available pods: 0
Feb  3 12:50:26.062: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:27.761: INFO: Number of nodes with available pods: 0
Feb  3 12:50:27.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:28.288: INFO: Number of nodes with available pods: 0
Feb  3 12:50:28.288: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:29.532: INFO: Number of nodes with available pods: 0
Feb  3 12:50:29.532: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:30.061: INFO: Number of nodes with available pods: 0
Feb  3 12:50:30.061: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:31.054: INFO: Number of nodes with available pods: 0
Feb  3 12:50:31.054: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  3 12:50:32.074: INFO: Number of nodes with available pods: 1
Feb  3 12:50:32.074: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wgnn7, will wait for the garbage collector to delete the pods
Feb  3 12:50:32.165: INFO: Deleting DaemonSet.extensions daemon-set took: 26.309ms
Feb  3 12:50:32.266: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.492223ms
Feb  3 12:50:39.295: INFO: Number of nodes with available pods: 0
Feb  3 12:50:39.295: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 12:50:39.301: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wgnn7/daemonsets","resourceVersion":"20420322"},"items":null}

Feb  3 12:50:39.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wgnn7/pods","resourceVersion":"20420322"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:50:39.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wgnn7" for this suite.
Feb  3 12:50:45.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:50:45.441: INFO: namespace: e2e-tests-daemonsets-wgnn7, resource: bindings, ignored listing per whitelist
Feb  3 12:50:45.483: INFO: namespace e2e-tests-daemonsets-wgnn7 deletion completed in 6.154765157s

• [SLOW TEST:36.076 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
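The DaemonSet test above creates a one-container DaemonSet and then waits for the controller to report an available pod on the node, both before and after a pod is forced to fail. A rough client-go sketch of that create-and-poll loop follows; it assumes a recent client-go (older releases, matching the v1.13 cluster in this log, omit the context and options arguments), and the namespace, labels and image are illustrative.

package main

import (
    "context"
    "fmt"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "default" // the e2e run uses a generated e2e-tests-daemonsets-* namespace
    labels := map[string]string{"daemonset-name": "daemon-set"}

    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    if _, err := cs.AppsV1().DaemonSets(ns).Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Poll until the controller reports an available pod on every node,
    // mirroring the "daemon pods launch on every node" check in the log.
    for {
        got, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("desired=%d available=%d\n", got.Status.DesiredNumberScheduled, got.Status.NumberAvailable)
        if got.Status.DesiredNumberScheduled > 0 && got.Status.NumberAvailable == got.Status.DesiredNumberScheduled {
            break
        }
        time.Sleep(2 * time.Second)
    }
}
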
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:50:45.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  3 12:50:46.211: INFO: Waiting up to 5m0s for pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp" in namespace "e2e-tests-svcaccounts-q5blx" to be "success or failure"
Feb  3 12:50:46.254: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 42.547724ms
Feb  3 12:50:48.279: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067324814s
Feb  3 12:50:50.299: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087071634s
Feb  3 12:50:52.330: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118408303s
Feb  3 12:50:54.394: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181954557s
Feb  3 12:50:56.424: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212814474s
Feb  3 12:50:58.455: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243017554s
Feb  3 12:51:00.485: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.273279932s
Feb  3 12:51:02.510: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.29849386s
STEP: Saw pod success
Feb  3 12:51:02.510: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp" satisfied condition "success or failure"
Feb  3 12:51:02.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp container token-test: 
STEP: delete the pod
Feb  3 12:51:02.797: INFO: Waiting for pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp to disappear
Feb  3 12:51:02.826: INFO: Pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-mtdxp no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  3 12:51:02.838: INFO: Waiting up to 5m0s for pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz" in namespace "e2e-tests-svcaccounts-q5blx" to be "success or failure"
Feb  3 12:51:02.866: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 27.195992ms
Feb  3 12:51:05.473: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634983959s
Feb  3 12:51:07.508: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669343744s
Feb  3 12:51:09.929: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.090981959s
Feb  3 12:51:12.208: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369391058s
Feb  3 12:51:14.221: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.382996336s
Feb  3 12:51:16.234: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.395757755s
Feb  3 12:51:18.252: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.413572429s
STEP: Saw pod success
Feb  3 12:51:18.252: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz" satisfied condition "success or failure"
Feb  3 12:51:18.264: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz container root-ca-test: 
STEP: delete the pod
Feb  3 12:51:18.369: INFO: Waiting for pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz to disappear
Feb  3 12:51:18.459: INFO: Pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-j5vhz no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  3 12:51:18.505: INFO: Waiting up to 5m0s for pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m" in namespace "e2e-tests-svcaccounts-q5blx" to be "success or failure"
Feb  3 12:51:18.557: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 51.003012ms
Feb  3 12:51:21.091: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5852981s
Feb  3 12:51:23.102: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596119003s
Feb  3 12:51:25.137: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.631912972s
Feb  3 12:51:27.166: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6607839s
Feb  3 12:51:29.379: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872997355s
Feb  3 12:51:31.405: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.899405272s
Feb  3 12:51:33.451: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.945803821s
Feb  3 12:51:35.464: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958311402s
Feb  3 12:51:37.482: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.976471995s
STEP: Saw pod success
Feb  3 12:51:37.482: INFO: Pod "pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m" satisfied condition "success or failure"
Feb  3 12:51:37.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m container namespace-test: 
STEP: delete the pod
Feb  3 12:51:38.269: INFO: Waiting for pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m to disappear
Feb  3 12:51:38.292: INFO: Pod pod-service-account-cbca316b-4683-11ea-ab15-0242ac110005-bvw9m no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:51:38.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-q5blx" for this suite.
Feb  3 12:51:46.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:51:46.578: INFO: namespace: e2e-tests-svcaccounts-q5blx, resource: bindings, ignored listing per whitelist
Feb  3 12:51:46.690: INFO: namespace e2e-tests-svcaccounts-q5blx deletion completed in 8.376355457s

• [SLOW TEST:61.205 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
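The three containers in the ServiceAccounts test (token-test, root-ca-test, namespace-test) each read one of the files that the default service account mounts into every pod. A minimal sketch of an equivalent pod spec, built and printed locally, is shown here; the pod name and busybox image are illustrative, while the mount path is the standard service-account directory.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "sa-mount-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "token-test",
                Image: "busybox",
                // The default service account credentials are auto-mounted at saDir.
                Command: []string{"sh", "-c",
                    "cat " + saDir + "/token " + saDir + "/ca.crt " + saDir + "/namespace"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
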
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:51:46.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  3 12:51:46.944: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420510,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 12:51:46.945: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420510,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  3 12:51:56.959: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420523,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  3 12:51:56.960: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420523,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  3 12:52:06.983: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420536,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 12:52:06.984: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420536,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  3 12:52:17.011: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420549,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 12:52:17.012: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-a,UID:efffc84a-4683-11ea-a994-fa163e34d433,ResourceVersion:20420549,Generation:0,CreationTimestamp:2020-02-03 12:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  3 12:52:27.035: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-b,UID:07e3b684-4684-11ea-a994-fa163e34d433,ResourceVersion:20420562,Generation:0,CreationTimestamp:2020-02-03 12:52:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 12:52:27.035: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-b,UID:07e3b684-4684-11ea-a994-fa163e34d433,ResourceVersion:20420562,Generation:0,CreationTimestamp:2020-02-03 12:52:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  3 12:52:37.056: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-b,UID:07e3b684-4684-11ea-a994-fa163e34d433,ResourceVersion:20420575,Generation:0,CreationTimestamp:2020-02-03 12:52:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 12:52:37.056: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bhmn7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bhmn7/configmaps/e2e-watch-test-configmap-b,UID:07e3b684-4684-11ea-a994-fa163e34d433,ResourceVersion:20420575,Generation:0,CreationTimestamp:2020-02-03 12:52:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:52:47.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-bhmn7" for this suite.
Feb  3 12:52:53.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:52:53.266: INFO: namespace: e2e-tests-watch-bhmn7, resource: bindings, ignored listing per whitelist
Feb  3 12:52:53.494: INFO: namespace e2e-tests-watch-bhmn7 deletion completed in 6.405617766s

• [SLOW TEST:66.803 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
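The watch test above registers label-selector watches and then asserts the ADDED/MODIFIED/DELETED events it receives. A small client-go sketch of the "label A" watcher follows; it assumes a recent client-go signature for Watch, and the namespace and label value mirror the ones in the log purely for illustration.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Watch only configmaps carrying label A, like the "label A" watcher in the log.
    w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        // ev.Type is ADDED, MODIFIED or DELETED, matching the "Got : ..." lines above.
        fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
    }
}
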
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:52:53.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-17c9fb98-4684-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 12:52:53.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-x2svk" to be "success or failure"
Feb  3 12:52:53.723: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.039192ms
Feb  3 12:52:55.786: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07389139s
Feb  3 12:52:57.799: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087260755s
Feb  3 12:53:00.577: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.865228559s
Feb  3 12:53:02.622: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909524604s
Feb  3 12:53:04.652: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.939461282s
STEP: Saw pod success
Feb  3 12:53:04.652: INFO: Pod "pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 12:53:04.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  3 12:53:04.866: INFO: Waiting for pod pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005 to disappear
Feb  3 12:53:04.874: INFO: Pod pod-configmaps-17caf850-4684-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 12:53:04.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-x2svk" for this suite.
Feb  3 12:53:10.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 12:53:11.046: INFO: namespace: e2e-tests-configmap-x2svk, resource: bindings, ignored listing per whitelist
Feb  3 12:53:11.120: INFO: namespace e2e-tests-configmap-x2svk deletion completed in 6.233901641s

• [SLOW TEST:17.625 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
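The "multiple volumes in the same pod" test mounts one configMap at two different paths in one container. A minimal sketch of such a pod spec (built and printed locally, with illustrative names) follows.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The same configMap backs both volumes.
    cmSource := corev1.VolumeSource{
        ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
        },
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-two-volumes-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /etc/configmap-volume-1 /etc/configmap-volume-2"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
                    {Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
                },
            }},
            Volumes: []corev1.Volume{
                {Name: "configmap-volume-1", VolumeSource: cmSource},
                {Name: "configmap-volume-2", VolumeSource: cmSource},
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
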
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 12:53:11.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  3 12:53:13.052: INFO: Pod name wrapped-volume-race-232e98eb-4684-11ea-ab15-0242ac110005: Found 0 pods out of 5
Feb  3 12:53:18.070: INFO: Pod name wrapped-volume-race-232e98eb-4684-11ea-ab15-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-232e98eb-4684-11ea-ab15-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7jntl, will wait for the garbage collector to delete the pods
Feb  3 12:55:01.047: INFO: Deleting ReplicationController wrapped-volume-race-232e98eb-4684-11ea-ab15-0242ac110005 took: 112.686089ms
Feb  3 12:55:01.448: INFO: Terminating ReplicationController wrapped-volume-race-232e98eb-4684-11ea-ab15-0242ac110005 pods took: 401.317653ms
STEP: Creating RC which spawns configmap-volume pods
Feb  3 12:55:53.418: INFO: Pod name wrapped-volume-race-82dede43-4684-11ea-ab15-0242ac110005: Found 0 pods out of 5
Feb  3 12:55:58.452: INFO: Pod name wrapped-volume-race-82dede43-4684-11ea-ab15-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-82dede43-4684-11ea-ab15-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7jntl, will wait for the garbage collector to delete the pods
Feb  3 12:58:14.888: INFO: Deleting ReplicationController wrapped-volume-race-82dede43-4684-11ea-ab15-0242ac110005 took: 28.157125ms
Feb  3 12:58:15.288: INFO: Terminating ReplicationController wrapped-volume-race-82dede43-4684-11ea-ab15-0242ac110005 pods took: 400.751313ms
STEP: Creating RC which spawns configmap-volume pods
Feb  3 12:59:04.164: INFO: Pod name wrapped-volume-race-f461c584-4684-11ea-ab15-0242ac110005: Found 0 pods out of 5
Feb  3 12:59:09.215: INFO: Pod name wrapped-volume-race-f461c584-4684-11ea-ab15-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f461c584-4684-11ea-ab15-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7jntl, will wait for the garbage collector to delete the pods
Feb  3 13:01:43.380: INFO: Deleting ReplicationController wrapped-volume-race-f461c584-4684-11ea-ab15-0242ac110005 took: 39.333314ms
Feb  3 13:01:43.681: INFO: Terminating ReplicationController wrapped-volume-race-f461c584-4684-11ea-ab15-0242ac110005 pods took: 300.959104ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:02:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-7jntl" for this suite.
Feb  3 13:02:45.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:02:45.889: INFO: namespace: e2e-tests-emptydir-wrapper-7jntl, resource: bindings, ignored listing per whitelist
Feb  3 13:02:46.044: INFO: namespace e2e-tests-emptydir-wrapper-7jntl deletion completed in 10.247834404s

• [SLOW TEST:574.923 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
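The EmptyDir wrapper race test repeatedly creates a ReplicationController whose pods each mount many configMap volumes, then deletes it and checks that nothing deadlocks. The sketch below builds a scaled-down version of such a controller (5 volumes per pod instead of 50) and prints it; all names are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(5)
    labels := map[string]string{"name": "wrapped-volume-race-demo"}

    // Mount a handful of configMap volumes per pod; the e2e test uses 50.
    var volumes []corev1.Volume
    var mounts []corev1.VolumeMount
    for i := 0; i < 5; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i)
        volumes = append(volumes, corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: name},
            }},
        })
        mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
    }

    rc := corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-demo"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:         "test-container",
                        Image:        "busybox",
                        Command:      []string{"sleep", "10000"},
                        VolumeMounts: mounts,
                    }},
                    Volumes: volumes,
                },
            },
        },
    }
    out, _ := json.MarshalIndent(rc, "", "  ")
    fmt.Println(string(out))
}
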
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:02:46.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xb9hx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 13:02:46.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 13:03:26.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xb9hx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 13:03:26.869: INFO: >>> kubeConfig: /root/.kube/config
I0203 13:03:26.960084       8 log.go:172] (0xc0016044d0) (0xc001ece6e0) Create stream
I0203 13:03:26.960318       8 log.go:172] (0xc0016044d0) (0xc001ece6e0) Stream added, broadcasting: 1
I0203 13:03:26.970487       8 log.go:172] (0xc0016044d0) Reply frame received for 1
I0203 13:03:26.970538       8 log.go:172] (0xc0016044d0) (0xc000a5a820) Create stream
I0203 13:03:26.970582       8 log.go:172] (0xc0016044d0) (0xc000a5a820) Stream added, broadcasting: 3
I0203 13:03:26.971849       8 log.go:172] (0xc0016044d0) Reply frame received for 3
I0203 13:03:26.971884       8 log.go:172] (0xc0016044d0) (0xc001ece780) Create stream
I0203 13:03:26.971898       8 log.go:172] (0xc0016044d0) (0xc001ece780) Stream added, broadcasting: 5
I0203 13:03:26.975029       8 log.go:172] (0xc0016044d0) Reply frame received for 5
I0203 13:03:27.127745       8 log.go:172] (0xc0016044d0) Data frame received for 3
I0203 13:03:27.127921       8 log.go:172] (0xc000a5a820) (3) Data frame handling
I0203 13:03:27.127962       8 log.go:172] (0xc000a5a820) (3) Data frame sent
I0203 13:03:27.291333       8 log.go:172] (0xc0016044d0) (0xc000a5a820) Stream removed, broadcasting: 3
I0203 13:03:27.291503       8 log.go:172] (0xc0016044d0) Data frame received for 1
I0203 13:03:27.291560       8 log.go:172] (0xc001ece6e0) (1) Data frame handling
I0203 13:03:27.291617       8 log.go:172] (0xc001ece6e0) (1) Data frame sent
I0203 13:03:27.291694       8 log.go:172] (0xc0016044d0) (0xc001ece780) Stream removed, broadcasting: 5
I0203 13:03:27.291755       8 log.go:172] (0xc0016044d0) (0xc001ece6e0) Stream removed, broadcasting: 1
I0203 13:03:27.291776       8 log.go:172] (0xc0016044d0) Go away received
I0203 13:03:27.292616       8 log.go:172] (0xc0016044d0) (0xc001ece6e0) Stream removed, broadcasting: 1
I0203 13:03:27.292654       8 log.go:172] (0xc0016044d0) (0xc000a5a820) Stream removed, broadcasting: 3
I0203 13:03:27.292661       8 log.go:172] (0xc0016044d0) (0xc001ece780) Stream removed, broadcasting: 5
Feb  3 13:03:27.292: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:03:27.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xb9hx" for this suite.
Feb  3 13:03:55.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:03:55.459: INFO: namespace: e2e-tests-pod-network-test-xb9hx, resource: bindings, ignored listing per whitelist
Feb  3 13:03:55.561: INFO: namespace e2e-tests-pod-network-test-xb9hx deletion completed in 28.233699131s

• [SLOW TEST:69.517 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
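The intra-pod networking check above execs curl inside the host test pod and asks the netexec container at 10.32.0.5:8080 to dial the other test pod at 10.32.0.4:8080. The sketch below issues the same /dial request directly with Go's net/http; the pod IPs are taken from the log and are only reachable from inside the cluster network, so this is illustrative rather than something runnable from a workstation.

package main

import (
    "fmt"
    "io"
    "net/http"
)

// Reproduces the probe that the e2e framework runs via curl inside
// host-test-container-pod: ask the netexec pod at 10.32.0.5 to dial the
// other test pod at 10.32.0.4 over HTTP and report what it got back.
func main() {
    url := "http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    // Prints the JSON response listing the hostname(s) returned by the dialed pod.
    fmt.Println(string(body))
}
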
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:03:55.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:03:55.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zwhxh" for this suite.
Feb  3 13:04:01.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:04:02.072: INFO: namespace: e2e-tests-kubelet-test-zwhxh, resource: bindings, ignored listing per whitelist
Feb  3 13:04:02.250: INFO: namespace e2e-tests-kubelet-test-zwhxh deletion completed in 6.359656156s

• [SLOW TEST:6.688 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:04:02.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  3 13:04:02.413: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 13:04:02.420: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 13:04:02.423: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  3 13:04:02.451: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:04:02.451: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  3 13:04:02.451: INFO: 	Container weave ready: true, restart count 0
Feb  3 13:04:02.451: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 13:04:02.451: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  3 13:04:02.451: INFO: 	Container coredns ready: true, restart count 0
Feb  3 13:04:02.451: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:04:02.451: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:04:02.451: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:04:02.451: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  3 13:04:02.451: INFO: 	Container coredns ready: true, restart count 0
Feb  3 13:04:02.451: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  3 13:04:02.451: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  3 13:04:02.644: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6837050-4685-11ea-ab15-0242ac110005.15efe6256f531a5e], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-j6vjl/filler-pod-a6837050-4685-11ea-ab15-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6837050-4685-11ea-ab15-0242ac110005.15efe626e8828614], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6837050-4685-11ea-ab15-0242ac110005.15efe627da9a8a17], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a6837050-4685-11ea-ab15-0242ac110005.15efe62818eea2c7], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15efe628b60a0139], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:04:17.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-j6vjl" for this suite.
Feb  3 13:04:26.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:04:26.357: INFO: namespace: e2e-tests-sched-pred-j6vjl, resource: bindings, ignored listing per whitelist
Feb  3 13:04:26.530: INFO: namespace e2e-tests-sched-pred-j6vjl deletion completed in 8.503503659s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:24.281 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
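The scheduler-predicates test fills the node with pause pods and then submits one more pod whose CPU request cannot be satisfied, expecting the "Insufficient cpu" FailedScheduling event seen above. A minimal sketch of such an over-requesting pod spec follows; the 1000m request is an assumed figure chosen to exceed what the single node has left, not a value from the run.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Ask for more CPU than the node has left after the filler pods;
                        // the scheduler should report "0/1 nodes are available: 1 Insufficient cpu."
                        corev1.ResourceCPU: resource.MustParse("1000m"), // assumed value
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
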
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:04:26.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-sbhgv/configmap-test-b5217fba-4685-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  3 13:04:27.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005" in namespace "e2e-tests-configmap-sbhgv" to be "success or failure"
Feb  3 13:04:27.373: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.502466ms
Feb  3 13:04:29.836: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507784581s
Feb  3 13:04:31.855: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526593195s
Feb  3 13:04:33.879: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550872353s
Feb  3 13:04:35.897: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568751105s
Feb  3 13:04:38.100: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.771595307s
Feb  3 13:04:40.749: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.420863527s
STEP: Saw pod success
Feb  3 13:04:40.749: INFO: Pod "pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 13:04:40.774: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005 container env-test: 
STEP: delete the pod
Feb  3 13:04:41.648: INFO: Waiting for pod pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005 to disappear
Feb  3 13:04:41.716: INFO: Pod pod-configmaps-b5340ac8-4685-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:04:41.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sbhgv" for this suite.
Feb  3 13:04:49.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:04:49.787: INFO: namespace: e2e-tests-configmap-sbhgv, resource: bindings, ignored listing per whitelist
Feb  3 13:04:50.013: INFO: namespace e2e-tests-configmap-sbhgv deletion completed in 8.290652245s

• [SLOW TEST:23.480 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
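The ConfigMap environment-variable test injects a configMap key into a container through valueFrom/configMapKeyRef and then inspects the container environment. A minimal sketch of that pod shape, built and printed locally with illustrative names and key, is below.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            // Both the configMap name and the key are illustrative.
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
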
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:04:50.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  3 13:04:50.391: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  3 13:04:55.414: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  3 13:05:03.459: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  3 13:05:05.496: INFO: Creating deployment "test-rollover-deployment"
Feb  3 13:05:05.630: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  3 13:05:07.671: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  3 13:05:07.687: INFO: Ensure that both replica sets have 1 created replica
Feb  3 13:05:07.695: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  3 13:05:07.706: INFO: Updating deployment test-rollover-deployment
Feb  3 13:05:07.706: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  3 13:05:10.845: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  3 13:05:10.889: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  3 13:05:11.540: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:11.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331909, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:13.565: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:13.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331909, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:15.557: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:15.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331909, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:17.567: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:17.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331909, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:19.563: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:19.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331909, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:21.566: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:21.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331920, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:23.569: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:23.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331920, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:25.588: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:25.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331920, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:27.572: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:27.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331920, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:29.562: INFO: all replica sets need to contain the pod-template-hash label
Feb  3 13:05:29.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331920, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331905, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:05:31.832: INFO: 
Feb  3 13:05:31.832: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  3 13:05:31.849: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-6q5cr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6q5cr/deployments/test-rollover-deployment,UID:cbfafc61-4685-11ea-a994-fa163e34d433,ResourceVersion:20422111,Generation:2,CreationTimestamp:2020-02-03 13:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-03 13:05:05 +0000 UTC 2020-02-03 13:05:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-03 13:05:31 +0000 UTC 2020-02-03 13:05:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  3 13:05:31.857: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-6q5cr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6q5cr/replicasets/test-rollover-deployment-5b8479fdb6,UID:cd4cb616-4685-11ea-a994-fa163e34d433,ResourceVersion:20422101,Generation:2,CreationTimestamp:2020-02-03 13:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cbfafc61-4685-11ea-a994-fa163e34d433 0xc0022303e7 0xc0022303e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  3 13:05:31.857: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  3 13:05:31.858: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-6q5cr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6q5cr/replicasets/test-rollover-controller,UID:c2f41d5e-4685-11ea-a994-fa163e34d433,ResourceVersion:20422109,Generation:2,CreationTimestamp:2020-02-03 13:04:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cbfafc61-4685-11ea-a994-fa163e34d433 0xc0022301cf 0xc0022301e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 13:05:31.858: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-6q5cr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6q5cr/replicasets/test-rollover-deployment-58494b7559,UID:cc13aac6-4685-11ea-a994-fa163e34d433,ResourceVersion:20422067,Generation:2,CreationTimestamp:2020-02-03 13:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cbfafc61-4685-11ea-a994-fa163e34d433 0xc0022302d7 0xc0022302d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 13:05:31.866: INFO: Pod "test-rollover-deployment-5b8479fdb6-2pb66" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-2pb66,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-6q5cr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6q5cr/pods/test-rollover-deployment-5b8479fdb6-2pb66,UID:ce16bc36-4685-11ea-a994-fa163e34d433,ResourceVersion:20422086,Generation:0,CreationTimestamp:2020-02-03 13:05:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 cd4cb616-4685-11ea-a994-fa163e34d433 0xc002458c57 0xc002458c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ckfpq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ckfpq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ckfpq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002458cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002458ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:05:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:05:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:05:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:05:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-03 13:05:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-03 13:05:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://abfc479fa24f1eda3a484ce18ae638972913421539cade1de1e4131a09c42249}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:05:31.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6q5cr" for this suite.
Feb  3 13:05:42.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:05:42.329: INFO: namespace: e2e-tests-deployment-6q5cr, resource: bindings, ignored listing per whitelist
Feb  3 13:05:42.350: INFO: namespace e2e-tests-deployment-6q5cr deletion completed in 10.467872063s

• [SLOW TEST:52.335 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
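For reference: the Deployment dump above shows the exact strategy the rollover test relies on — RollingUpdate with MaxSurge 1, MaxUnavailable 0, MinReadySeconds 10, and a single replica of the redis image. The following is a minimal sketch, not the test's own code, that builds an equivalent spec with the k8s.io/api types; it assumes those modules are on the module path, and the closing Printf exists only so the snippet runs standalone.

// Illustrative sketch: a Deployment spec equivalent to "test-rollover-deployment" above.
package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    replicas := int32(1)
    maxSurge := intstr.FromInt(1)       // allow one extra pod during rollover
    maxUnavailable := intstr.FromInt(0) // never drop below the desired count

    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "test-rollover-deployment",
            Labels: map[string]string{"name": "rollover-pod"},
        },
        Spec: appsv1.DeploymentSpec{
            Replicas:        &replicas,
            MinReadySeconds: 10, // new pod must stay Ready 10s before it counts as available
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"name": "rollover-pod"},
            },
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxSurge:       &maxSurge,
                    MaxUnavailable: &maxUnavailable,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"name": "rollover-pod"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }

    fmt.Printf("%s: strategy=%s maxSurge=%s maxUnavailable=%s\n",
        d.Name, d.Spec.Strategy.Type,
        d.Spec.Strategy.RollingUpdate.MaxSurge.String(),
        d.Spec.Strategy.RollingUpdate.MaxUnavailable.String())
}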
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:05:42.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-p9dhh
I0203 13:05:42.614272       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-p9dhh, replica count: 1
I0203 13:05:43.665211       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:44.665683       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:45.666116       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:46.666902       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:47.667533       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:48.668860       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:49.669841       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:50.670264       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 13:05:51.670847       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 13:05:51.906: INFO: Created: latency-svc-w46bn
Feb  3 13:05:51.932: INFO: Got endpoints: latency-svc-w46bn [161.656792ms]
Feb  3 13:05:52.142: INFO: Created: latency-svc-v6zx7
Feb  3 13:05:52.178: INFO: Got endpoints: latency-svc-v6zx7 [243.106364ms]
Feb  3 13:05:52.400: INFO: Created: latency-svc-2tsg8
Feb  3 13:05:52.426: INFO: Got endpoints: latency-svc-2tsg8 [493.472926ms]
Feb  3 13:05:52.556: INFO: Created: latency-svc-2rfb2
Feb  3 13:05:52.580: INFO: Got endpoints: latency-svc-2rfb2 [644.122355ms]
Feb  3 13:05:52.737: INFO: Created: latency-svc-w26f9
Feb  3 13:05:52.751: INFO: Got endpoints: latency-svc-w26f9 [816.563876ms]
Feb  3 13:05:52.838: INFO: Created: latency-svc-844zk
Feb  3 13:05:52.918: INFO: Got endpoints: latency-svc-844zk [983.330054ms]
Feb  3 13:05:52.934: INFO: Created: latency-svc-wv4ld
Feb  3 13:05:52.963: INFO: Got endpoints: latency-svc-wv4ld [1.02751278s]
Feb  3 13:05:53.203: INFO: Created: latency-svc-qhdr2
Feb  3 13:05:53.203: INFO: Got endpoints: latency-svc-qhdr2 [1.267283758s]
Feb  3 13:05:53.353: INFO: Created: latency-svc-tm78q
Feb  3 13:05:53.353: INFO: Got endpoints: latency-svc-tm78q [1.416898898s]
Feb  3 13:05:53.498: INFO: Created: latency-svc-psmnd
Feb  3 13:05:53.522: INFO: Got endpoints: latency-svc-psmnd [1.585761238s]
Feb  3 13:05:53.549: INFO: Created: latency-svc-p6sk5
Feb  3 13:05:53.584: INFO: Got endpoints: latency-svc-p6sk5 [1.648192358s]
Feb  3 13:05:53.772: INFO: Created: latency-svc-vmjt4
Feb  3 13:05:53.812: INFO: Got endpoints: latency-svc-vmjt4 [1.875467661s]
Feb  3 13:05:53.827: INFO: Created: latency-svc-m8blv
Feb  3 13:05:53.939: INFO: Got endpoints: latency-svc-m8blv [2.003822398s]
Feb  3 13:05:53.956: INFO: Created: latency-svc-mc2t2
Feb  3 13:05:53.971: INFO: Got endpoints: latency-svc-mc2t2 [2.034886265s]
Feb  3 13:05:54.140: INFO: Created: latency-svc-4mknd
Feb  3 13:05:54.147: INFO: Got endpoints: latency-svc-4mknd [2.211140665s]
Feb  3 13:05:54.241: INFO: Created: latency-svc-hj96d
Feb  3 13:05:54.375: INFO: Got endpoints: latency-svc-hj96d [2.438846641s]
Feb  3 13:05:54.465: INFO: Created: latency-svc-927sp
Feb  3 13:05:54.469: INFO: Got endpoints: latency-svc-927sp [2.291308121s]
Feb  3 13:05:54.663: INFO: Created: latency-svc-g8n2j
Feb  3 13:05:54.740: INFO: Created: latency-svc-5p8pp
Feb  3 13:05:54.747: INFO: Got endpoints: latency-svc-g8n2j [2.320523179s]
Feb  3 13:05:54.826: INFO: Got endpoints: latency-svc-5p8pp [2.246420427s]
Feb  3 13:05:54.846: INFO: Created: latency-svc-n4x5v
Feb  3 13:05:54.856: INFO: Got endpoints: latency-svc-n4x5v [2.104265702s]
Feb  3 13:05:54.913: INFO: Created: latency-svc-lltkz
Feb  3 13:05:54.989: INFO: Got endpoints: latency-svc-lltkz [2.070631732s]
Feb  3 13:05:55.004: INFO: Created: latency-svc-d44lv
Feb  3 13:05:55.018: INFO: Got endpoints: latency-svc-d44lv [2.054851272s]
Feb  3 13:05:55.061: INFO: Created: latency-svc-v2hwz
Feb  3 13:05:55.074: INFO: Got endpoints: latency-svc-v2hwz [1.870624508s]
Feb  3 13:05:55.237: INFO: Created: latency-svc-h5c2f
Feb  3 13:05:55.251: INFO: Got endpoints: latency-svc-h5c2f [1.897670231s]
Feb  3 13:05:55.361: INFO: Created: latency-svc-frqjg
Feb  3 13:05:55.429: INFO: Got endpoints: latency-svc-frqjg [1.907589222s]
Feb  3 13:05:55.469: INFO: Created: latency-svc-vbf4c
Feb  3 13:05:55.480: INFO: Got endpoints: latency-svc-vbf4c [1.895911193s]
Feb  3 13:05:55.761: INFO: Created: latency-svc-59pb8
Feb  3 13:05:55.783: INFO: Got endpoints: latency-svc-59pb8 [1.970373673s]
Feb  3 13:05:55.831: INFO: Created: latency-svc-zzhh7
Feb  3 13:05:56.040: INFO: Got endpoints: latency-svc-zzhh7 [2.100580254s]
Feb  3 13:05:56.052: INFO: Created: latency-svc-42brp
Feb  3 13:05:56.078: INFO: Got endpoints: latency-svc-42brp [2.106877383s]
Feb  3 13:05:56.308: INFO: Created: latency-svc-9h62h
Feb  3 13:05:56.318: INFO: Got endpoints: latency-svc-9h62h [2.170844239s]
Feb  3 13:05:56.391: INFO: Created: latency-svc-dqrhd
Feb  3 13:05:56.663: INFO: Got endpoints: latency-svc-dqrhd [2.28731194s]
Feb  3 13:05:56.677: INFO: Created: latency-svc-bczs8
Feb  3 13:05:56.708: INFO: Got endpoints: latency-svc-bczs8 [2.238334606s]
Feb  3 13:05:56.966: INFO: Created: latency-svc-sb7qx
Feb  3 13:05:56.987: INFO: Got endpoints: latency-svc-sb7qx [2.240376305s]
Feb  3 13:05:57.257: INFO: Created: latency-svc-qlpnk
Feb  3 13:05:57.284: INFO: Got endpoints: latency-svc-qlpnk [2.457277929s]
Feb  3 13:05:57.502: INFO: Created: latency-svc-j7hgt
Feb  3 13:05:57.529: INFO: Got endpoints: latency-svc-j7hgt [2.672759217s]
Feb  3 13:05:57.764: INFO: Created: latency-svc-tsqzz
Feb  3 13:05:57.772: INFO: Got endpoints: latency-svc-tsqzz [2.782748133s]
Feb  3 13:05:58.045: INFO: Created: latency-svc-wp5gs
Feb  3 13:05:58.062: INFO: Got endpoints: latency-svc-wp5gs [3.043424272s]
Feb  3 13:05:58.121: INFO: Created: latency-svc-stmqr
Feb  3 13:05:58.337: INFO: Got endpoints: latency-svc-stmqr [3.262477518s]
Feb  3 13:05:58.367: INFO: Created: latency-svc-qsk89
Feb  3 13:05:58.412: INFO: Got endpoints: latency-svc-qsk89 [3.15984421s]
Feb  3 13:05:58.623: INFO: Created: latency-svc-hf8gz
Feb  3 13:05:58.643: INFO: Got endpoints: latency-svc-hf8gz [3.213263592s]
Feb  3 13:05:58.816: INFO: Created: latency-svc-sd7mf
Feb  3 13:05:58.844: INFO: Got endpoints: latency-svc-sd7mf [3.363659851s]
Feb  3 13:05:59.025: INFO: Created: latency-svc-jb4d9
Feb  3 13:05:59.037: INFO: Got endpoints: latency-svc-jb4d9 [3.253388928s]
Feb  3 13:05:59.248: INFO: Created: latency-svc-gxqlw
Feb  3 13:05:59.276: INFO: Got endpoints: latency-svc-gxqlw [3.235967258s]
Feb  3 13:05:59.491: INFO: Created: latency-svc-rmj4g
Feb  3 13:05:59.652: INFO: Got endpoints: latency-svc-rmj4g [3.574219938s]
Feb  3 13:05:59.681: INFO: Created: latency-svc-r7vkh
Feb  3 13:05:59.709: INFO: Got endpoints: latency-svc-r7vkh [3.391237283s]
Feb  3 13:05:59.895: INFO: Created: latency-svc-4wg29
Feb  3 13:05:59.917: INFO: Got endpoints: latency-svc-4wg29 [3.253757871s]
Feb  3 13:05:59.994: INFO: Created: latency-svc-ll52f
Feb  3 13:06:00.104: INFO: Got endpoints: latency-svc-ll52f [3.396198815s]
Feb  3 13:06:00.165: INFO: Created: latency-svc-n946h
Feb  3 13:06:00.220: INFO: Got endpoints: latency-svc-n946h [3.232145559s]
Feb  3 13:06:00.420: INFO: Created: latency-svc-kfqxg
Feb  3 13:06:00.444: INFO: Got endpoints: latency-svc-kfqxg [3.159995418s]
Feb  3 13:06:00.701: INFO: Created: latency-svc-9jjcq
Feb  3 13:06:00.854: INFO: Got endpoints: latency-svc-9jjcq [3.32544099s]
Feb  3 13:06:00.874: INFO: Created: latency-svc-tlcrb
Feb  3 13:06:01.093: INFO: Got endpoints: latency-svc-tlcrb [3.320487501s]
Feb  3 13:06:01.143: INFO: Created: latency-svc-bjnxz
Feb  3 13:06:01.181: INFO: Got endpoints: latency-svc-bjnxz [3.119551901s]
Feb  3 13:06:01.417: INFO: Created: latency-svc-pfjs8
Feb  3 13:06:01.447: INFO: Got endpoints: latency-svc-pfjs8 [354.184356ms]
Feb  3 13:06:01.719: INFO: Created: latency-svc-sgbls
Feb  3 13:06:01.751: INFO: Got endpoints: latency-svc-sgbls [3.413627895s]
Feb  3 13:06:02.040: INFO: Created: latency-svc-jdc2j
Feb  3 13:06:02.077: INFO: Got endpoints: latency-svc-jdc2j [3.665297568s]
Feb  3 13:06:02.150: INFO: Created: latency-svc-p69qs
Feb  3 13:06:02.388: INFO: Created: latency-svc-crgr2
Feb  3 13:06:02.739: INFO: Got endpoints: latency-svc-p69qs [4.095477941s]
Feb  3 13:06:02.741: INFO: Got endpoints: latency-svc-crgr2 [3.896706908s]
Feb  3 13:06:03.016: INFO: Created: latency-svc-v5r89
Feb  3 13:06:03.042: INFO: Got endpoints: latency-svc-v5r89 [4.005087281s]
Feb  3 13:06:03.088: INFO: Created: latency-svc-zwpff
Feb  3 13:06:03.102: INFO: Got endpoints: latency-svc-zwpff [3.825202417s]
Feb  3 13:06:03.337: INFO: Created: latency-svc-th8vs
Feb  3 13:06:03.361: INFO: Got endpoints: latency-svc-th8vs [3.708851088s]
Feb  3 13:06:03.674: INFO: Created: latency-svc-996d2
Feb  3 13:06:03.692: INFO: Got endpoints: latency-svc-996d2 [3.981924814s]
Feb  3 13:06:03.950: INFO: Created: latency-svc-txmnm
Feb  3 13:06:03.962: INFO: Got endpoints: latency-svc-txmnm [4.044567288s]
Feb  3 13:06:04.163: INFO: Created: latency-svc-2mw9j
Feb  3 13:06:04.218: INFO: Got endpoints: latency-svc-2mw9j [4.113221546s]
Feb  3 13:06:04.361: INFO: Created: latency-svc-kdlqf
Feb  3 13:06:04.405: INFO: Got endpoints: latency-svc-kdlqf [4.185206266s]
Feb  3 13:06:04.609: INFO: Created: latency-svc-vpc2p
Feb  3 13:06:04.669: INFO: Got endpoints: latency-svc-vpc2p [4.224424336s]
Feb  3 13:06:04.953: INFO: Created: latency-svc-btsv8
Feb  3 13:06:05.007: INFO: Got endpoints: latency-svc-btsv8 [4.1522439s]
Feb  3 13:06:05.012: INFO: Created: latency-svc-wkfwc
Feb  3 13:06:05.342: INFO: Got endpoints: latency-svc-wkfwc [4.160394589s]
Feb  3 13:06:05.679: INFO: Created: latency-svc-p9ccn
Feb  3 13:06:05.779: INFO: Created: latency-svc-btggf
Feb  3 13:06:05.779: INFO: Got endpoints: latency-svc-p9ccn [4.331197987s]
Feb  3 13:06:05.983: INFO: Got endpoints: latency-svc-btggf [4.232416438s]
Feb  3 13:06:06.062: INFO: Created: latency-svc-gq6tl
Feb  3 13:06:06.286: INFO: Got endpoints: latency-svc-gq6tl [4.207949956s]
Feb  3 13:06:06.465: INFO: Created: latency-svc-pflpn
Feb  3 13:06:06.471: INFO: Got endpoints: latency-svc-pflpn [3.73164605s]
Feb  3 13:06:06.538: INFO: Created: latency-svc-d2gcd
Feb  3 13:06:06.686: INFO: Got endpoints: latency-svc-d2gcd [3.945075496s]
Feb  3 13:06:06.695: INFO: Created: latency-svc-lf7zp
Feb  3 13:06:06.704: INFO: Got endpoints: latency-svc-lf7zp [3.662362292s]
Feb  3 13:06:06.779: INFO: Created: latency-svc-ch4q6
Feb  3 13:06:06.890: INFO: Got endpoints: latency-svc-ch4q6 [3.78854527s]
Feb  3 13:06:06.900: INFO: Created: latency-svc-25vzf
Feb  3 13:06:06.908: INFO: Got endpoints: latency-svc-25vzf [3.546061476s]
Feb  3 13:06:06.956: INFO: Created: latency-svc-2dqlq
Feb  3 13:06:06.971: INFO: Got endpoints: latency-svc-2dqlq [3.27937424s]
Feb  3 13:06:07.086: INFO: Created: latency-svc-bhrxd
Feb  3 13:06:07.104: INFO: Got endpoints: latency-svc-bhrxd [3.142201051s]
Feb  3 13:06:07.164: INFO: Created: latency-svc-gmv6r
Feb  3 13:06:07.178: INFO: Got endpoints: latency-svc-gmv6r [2.959541297s]
Feb  3 13:06:07.296: INFO: Created: latency-svc-646j9
Feb  3 13:06:07.306: INFO: Got endpoints: latency-svc-646j9 [2.900238694s]
Feb  3 13:06:07.368: INFO: Created: latency-svc-ccpcd
Feb  3 13:06:07.469: INFO: Got endpoints: latency-svc-ccpcd [2.798723148s]
Feb  3 13:06:07.485: INFO: Created: latency-svc-295v2
Feb  3 13:06:07.493: INFO: Got endpoints: latency-svc-295v2 [2.485847617s]
Feb  3 13:06:07.554: INFO: Created: latency-svc-ldwnm
Feb  3 13:06:07.681: INFO: Got endpoints: latency-svc-ldwnm [2.338817959s]
Feb  3 13:06:07.711: INFO: Created: latency-svc-qwsrz
Feb  3 13:06:07.735: INFO: Got endpoints: latency-svc-qwsrz [1.95595888s]
Feb  3 13:06:07.890: INFO: Created: latency-svc-bszjr
Feb  3 13:06:07.984: INFO: Created: latency-svc-lcrcb
Feb  3 13:06:08.027: INFO: Got endpoints: latency-svc-bszjr [2.04364039s]
Feb  3 13:06:08.158: INFO: Got endpoints: latency-svc-lcrcb [1.872576385s]
Feb  3 13:06:08.181: INFO: Created: latency-svc-tjkg5
Feb  3 13:06:08.226: INFO: Got endpoints: latency-svc-tjkg5 [1.754717566s]
Feb  3 13:06:08.351: INFO: Created: latency-svc-kk5fz
Feb  3 13:06:08.359: INFO: Got endpoints: latency-svc-kk5fz [1.672314199s]
Feb  3 13:06:08.431: INFO: Created: latency-svc-r5nc7
Feb  3 13:06:08.439: INFO: Got endpoints: latency-svc-r5nc7 [1.73411595s]
Feb  3 13:06:08.573: INFO: Created: latency-svc-vcj8z
Feb  3 13:06:08.780: INFO: Got endpoints: latency-svc-vcj8z [1.889028834s]
Feb  3 13:06:08.809: INFO: Created: latency-svc-rwblh
Feb  3 13:06:08.835: INFO: Got endpoints: latency-svc-rwblh [1.926921951s]
Feb  3 13:06:08.974: INFO: Created: latency-svc-bsl6h
Feb  3 13:06:08.998: INFO: Got endpoints: latency-svc-bsl6h [2.026519842s]
Feb  3 13:06:09.046: INFO: Created: latency-svc-lw8b5
Feb  3 13:06:09.186: INFO: Got endpoints: latency-svc-lw8b5 [2.081389408s]
Feb  3 13:06:09.263: INFO: Created: latency-svc-q8w47
Feb  3 13:06:09.363: INFO: Got endpoints: latency-svc-q8w47 [2.184509499s]
Feb  3 13:06:09.445: INFO: Created: latency-svc-td2cq
Feb  3 13:06:09.584: INFO: Got endpoints: latency-svc-td2cq [2.278516469s]
Feb  3 13:06:09.679: INFO: Created: latency-svc-xr5rz
Feb  3 13:06:09.826: INFO: Got endpoints: latency-svc-xr5rz [2.35680588s]
Feb  3 13:06:09.845: INFO: Created: latency-svc-svgrp
Feb  3 13:06:09.875: INFO: Got endpoints: latency-svc-svgrp [2.381361128s]
Feb  3 13:06:09.914: INFO: Created: latency-svc-vt78s
Feb  3 13:06:10.070: INFO: Got endpoints: latency-svc-vt78s [2.388652925s]
Feb  3 13:06:10.109: INFO: Created: latency-svc-d5k7d
Feb  3 13:06:10.113: INFO: Got endpoints: latency-svc-d5k7d [2.378039835s]
Feb  3 13:06:10.383: INFO: Created: latency-svc-rsf7m
Feb  3 13:06:10.403: INFO: Got endpoints: latency-svc-rsf7m [2.374850952s]
Feb  3 13:06:10.630: INFO: Created: latency-svc-lnz2b
Feb  3 13:06:10.835: INFO: Got endpoints: latency-svc-lnz2b [2.675903102s]
Feb  3 13:06:10.855: INFO: Created: latency-svc-bwj4m
Feb  3 13:06:10.870: INFO: Got endpoints: latency-svc-bwj4m [2.643242441s]
Feb  3 13:06:11.077: INFO: Created: latency-svc-84s2x
Feb  3 13:06:11.093: INFO: Got endpoints: latency-svc-84s2x [2.73424682s]
Feb  3 13:06:11.150: INFO: Created: latency-svc-gthx8
Feb  3 13:06:11.171: INFO: Got endpoints: latency-svc-gthx8 [2.731583327s]
Feb  3 13:06:11.347: INFO: Created: latency-svc-vptcc
Feb  3 13:06:11.361: INFO: Got endpoints: latency-svc-vptcc [2.580773608s]
Feb  3 13:06:11.503: INFO: Created: latency-svc-spcrs
Feb  3 13:06:11.526: INFO: Got endpoints: latency-svc-spcrs [2.69067061s]
Feb  3 13:06:11.576: INFO: Created: latency-svc-9bj2k
Feb  3 13:06:11.726: INFO: Created: latency-svc-kn7ss
Feb  3 13:06:11.733: INFO: Got endpoints: latency-svc-9bj2k [2.735332333s]
Feb  3 13:06:11.747: INFO: Got endpoints: latency-svc-kn7ss [2.560867086s]
Feb  3 13:06:11.903: INFO: Created: latency-svc-k87xz
Feb  3 13:06:11.929: INFO: Got endpoints: latency-svc-k87xz [2.566087585s]
Feb  3 13:06:11.989: INFO: Created: latency-svc-chhjl
Feb  3 13:06:12.079: INFO: Got endpoints: latency-svc-chhjl [2.494531296s]
Feb  3 13:06:12.149: INFO: Created: latency-svc-t59ts
Feb  3 13:06:12.337: INFO: Got endpoints: latency-svc-t59ts [2.510855364s]
Feb  3 13:06:12.350: INFO: Created: latency-svc-6ntfj
Feb  3 13:06:12.599: INFO: Got endpoints: latency-svc-6ntfj [2.723758712s]
Feb  3 13:06:12.701: INFO: Created: latency-svc-7l6rq
Feb  3 13:06:12.875: INFO: Created: latency-svc-5h45p
Feb  3 13:06:12.889: INFO: Got endpoints: latency-svc-7l6rq [2.818495132s]
Feb  3 13:06:12.902: INFO: Got endpoints: latency-svc-5h45p [2.788786755s]
Feb  3 13:06:12.943: INFO: Created: latency-svc-kfqpl
Feb  3 13:06:13.031: INFO: Got endpoints: latency-svc-kfqpl [2.628479355s]
Feb  3 13:06:13.073: INFO: Created: latency-svc-s9ktd
Feb  3 13:06:13.082: INFO: Got endpoints: latency-svc-s9ktd [2.247182029s]
Feb  3 13:06:13.236: INFO: Created: latency-svc-q8pjj
Feb  3 13:06:13.273: INFO: Got endpoints: latency-svc-q8pjj [2.40264599s]
Feb  3 13:06:13.324: INFO: Created: latency-svc-tgckp
Feb  3 13:06:13.427: INFO: Got endpoints: latency-svc-tgckp [2.333777806s]
Feb  3 13:06:13.444: INFO: Created: latency-svc-xn2wq
Feb  3 13:06:13.458: INFO: Got endpoints: latency-svc-xn2wq [2.28714591s]
Feb  3 13:06:13.522: INFO: Created: latency-svc-l7hxn
Feb  3 13:06:13.644: INFO: Got endpoints: latency-svc-l7hxn [2.283229611s]
Feb  3 13:06:13.685: INFO: Created: latency-svc-r9tfc
Feb  3 13:06:13.860: INFO: Got endpoints: latency-svc-r9tfc [2.333632746s]
Feb  3 13:06:13.895: INFO: Created: latency-svc-mqxdg
Feb  3 13:06:13.920: INFO: Got endpoints: latency-svc-mqxdg [2.186107398s]
Feb  3 13:06:14.219: INFO: Created: latency-svc-m74rg
Feb  3 13:06:14.226: INFO: Got endpoints: latency-svc-m74rg [2.478791975s]
Feb  3 13:06:14.629: INFO: Created: latency-svc-zwghz
Feb  3 13:06:14.681: INFO: Got endpoints: latency-svc-zwghz [2.751299051s]
Feb  3 13:06:14.775: INFO: Created: latency-svc-v7655
Feb  3 13:06:14.811: INFO: Got endpoints: latency-svc-v7655 [2.73103081s]
Feb  3 13:06:14.879: INFO: Created: latency-svc-mfdqv
Feb  3 13:06:14.950: INFO: Got endpoints: latency-svc-mfdqv [2.612705263s]
Feb  3 13:06:14.986: INFO: Created: latency-svc-7njbz
Feb  3 13:06:14.994: INFO: Got endpoints: latency-svc-7njbz [2.394733322s]
Feb  3 13:06:15.186: INFO: Created: latency-svc-gmx94
Feb  3 13:06:15.201: INFO: Got endpoints: latency-svc-gmx94 [2.310792358s]
Feb  3 13:06:15.262: INFO: Created: latency-svc-dk24f
Feb  3 13:06:15.366: INFO: Got endpoints: latency-svc-dk24f [2.463736637s]
Feb  3 13:06:15.392: INFO: Created: latency-svc-k66tx
Feb  3 13:06:15.426: INFO: Got endpoints: latency-svc-k66tx [2.394124198s]
Feb  3 13:06:15.656: INFO: Created: latency-svc-kfb47
Feb  3 13:06:15.682: INFO: Got endpoints: latency-svc-kfb47 [2.59998061s]
Feb  3 13:06:15.913: INFO: Created: latency-svc-6mmwg
Feb  3 13:06:15.939: INFO: Got endpoints: latency-svc-6mmwg [2.666207943s]
Feb  3 13:06:16.006: INFO: Created: latency-svc-tr25r
Feb  3 13:06:16.112: INFO: Got endpoints: latency-svc-tr25r [2.684603515s]
Feb  3 13:06:16.136: INFO: Created: latency-svc-t25sk
Feb  3 13:06:16.157: INFO: Got endpoints: latency-svc-t25sk [2.698527262s]
Feb  3 13:06:16.320: INFO: Created: latency-svc-spg7z
Feb  3 13:06:16.321: INFO: Got endpoints: latency-svc-spg7z [2.676203814s]
Feb  3 13:06:16.375: INFO: Created: latency-svc-z6bt4
Feb  3 13:06:16.562: INFO: Got endpoints: latency-svc-z6bt4 [2.70117238s]
Feb  3 13:06:16.636: INFO: Created: latency-svc-5gqb2
Feb  3 13:06:16.738: INFO: Got endpoints: latency-svc-5gqb2 [2.817697455s]
Feb  3 13:06:16.776: INFO: Created: latency-svc-hd798
Feb  3 13:06:16.819: INFO: Got endpoints: latency-svc-hd798 [2.592826192s]
Feb  3 13:06:16.958: INFO: Created: latency-svc-fqrgl
Feb  3 13:06:17.001: INFO: Got endpoints: latency-svc-fqrgl [2.319888849s]
Feb  3 13:06:17.156: INFO: Created: latency-svc-cbk4r
Feb  3 13:06:17.180: INFO: Got endpoints: latency-svc-cbk4r [2.368665075s]
Feb  3 13:06:17.319: INFO: Created: latency-svc-855bd
Feb  3 13:06:17.360: INFO: Got endpoints: latency-svc-855bd [2.410057628s]
Feb  3 13:06:17.505: INFO: Created: latency-svc-hf2jn
Feb  3 13:06:17.542: INFO: Got endpoints: latency-svc-hf2jn [2.548308256s]
Feb  3 13:06:17.760: INFO: Created: latency-svc-4ktjw
Feb  3 13:06:17.779: INFO: Got endpoints: latency-svc-4ktjw [2.578318501s]
Feb  3 13:06:17.905: INFO: Created: latency-svc-wkcjp
Feb  3 13:06:17.930: INFO: Got endpoints: latency-svc-wkcjp [2.56341117s]
Feb  3 13:06:17.996: INFO: Created: latency-svc-9xs9z
Feb  3 13:06:18.143: INFO: Got endpoints: latency-svc-9xs9z [2.716961592s]
Feb  3 13:06:18.164: INFO: Created: latency-svc-nf2zf
Feb  3 13:06:18.180: INFO: Got endpoints: latency-svc-nf2zf [2.497681457s]
Feb  3 13:06:18.336: INFO: Created: latency-svc-km46h
Feb  3 13:06:18.362: INFO: Got endpoints: latency-svc-km46h [2.422858935s]
Feb  3 13:06:18.603: INFO: Created: latency-svc-k9q98
Feb  3 13:06:18.619: INFO: Got endpoints: latency-svc-k9q98 [2.507138198s]
Feb  3 13:06:18.720: INFO: Created: latency-svc-rtmkj
Feb  3 13:06:18.865: INFO: Got endpoints: latency-svc-rtmkj [2.708048861s]
Feb  3 13:06:18.897: INFO: Created: latency-svc-g5cfn
Feb  3 13:06:18.924: INFO: Got endpoints: latency-svc-g5cfn [2.602564969s]
Feb  3 13:06:19.082: INFO: Created: latency-svc-2d4dn
Feb  3 13:06:19.114: INFO: Got endpoints: latency-svc-2d4dn [2.551176382s]
Feb  3 13:06:19.253: INFO: Created: latency-svc-hfkfl
Feb  3 13:06:19.272: INFO: Got endpoints: latency-svc-hfkfl [2.534305996s]
Feb  3 13:06:19.331: INFO: Created: latency-svc-7gkgn
Feb  3 13:06:19.338: INFO: Got endpoints: latency-svc-7gkgn [2.518734341s]
Feb  3 13:06:19.507: INFO: Created: latency-svc-nlclq
Feb  3 13:06:19.541: INFO: Got endpoints: latency-svc-nlclq [2.540118409s]
Feb  3 13:06:19.693: INFO: Created: latency-svc-n78xj
Feb  3 13:06:19.756: INFO: Got endpoints: latency-svc-n78xj [2.575773583s]
Feb  3 13:06:19.888: INFO: Created: latency-svc-zslb7
Feb  3 13:06:19.910: INFO: Got endpoints: latency-svc-zslb7 [2.548879643s]
Feb  3 13:06:20.110: INFO: Created: latency-svc-fs97v
Feb  3 13:06:20.112: INFO: Got endpoints: latency-svc-fs97v [2.56912734s]
Feb  3 13:06:21.477: INFO: Created: latency-svc-rh5ml
Feb  3 13:06:21.512: INFO: Got endpoints: latency-svc-rh5ml [3.732934088s]
Feb  3 13:06:21.696: INFO: Created: latency-svc-zqz49
Feb  3 13:06:21.714: INFO: Got endpoints: latency-svc-zqz49 [3.783946389s]
Feb  3 13:06:21.875: INFO: Created: latency-svc-69zl9
Feb  3 13:06:21.889: INFO: Got endpoints: latency-svc-69zl9 [3.74544252s]
Feb  3 13:06:22.047: INFO: Created: latency-svc-fghn2
Feb  3 13:06:22.066: INFO: Got endpoints: latency-svc-fghn2 [3.885310283s]
Feb  3 13:06:22.133: INFO: Created: latency-svc-6kcmw
Feb  3 13:06:22.250: INFO: Got endpoints: latency-svc-6kcmw [3.887432999s]
Feb  3 13:06:22.277: INFO: Created: latency-svc-jlk9q
Feb  3 13:06:22.440: INFO: Created: latency-svc-2mzxh
Feb  3 13:06:22.683: INFO: Got endpoints: latency-svc-jlk9q [4.063609004s]
Feb  3 13:06:22.692: INFO: Created: latency-svc-vvnqz
Feb  3 13:06:22.703: INFO: Got endpoints: latency-svc-2mzxh [3.83732529s]
Feb  3 13:06:22.709: INFO: Got endpoints: latency-svc-vvnqz [3.785447108s]
Feb  3 13:06:22.768: INFO: Created: latency-svc-thl4m
Feb  3 13:06:22.885: INFO: Got endpoints: latency-svc-thl4m [3.770600552s]
Feb  3 13:06:22.933: INFO: Created: latency-svc-kd998
Feb  3 13:06:22.942: INFO: Got endpoints: latency-svc-kd998 [3.669418778s]
Feb  3 13:06:23.093: INFO: Created: latency-svc-j2mxh
Feb  3 13:06:23.099: INFO: Got endpoints: latency-svc-j2mxh [3.760707687s]
Feb  3 13:06:23.156: INFO: Created: latency-svc-wjks8
Feb  3 13:06:23.168: INFO: Got endpoints: latency-svc-wjks8 [3.626168843s]
Feb  3 13:06:23.324: INFO: Created: latency-svc-zz8q8
Feb  3 13:06:23.342: INFO: Got endpoints: latency-svc-zz8q8 [3.586070802s]
Feb  3 13:06:23.462: INFO: Created: latency-svc-lq79r
Feb  3 13:06:23.474: INFO: Got endpoints: latency-svc-lq79r [3.564547687s]
Feb  3 13:06:23.635: INFO: Created: latency-svc-gmpxb
Feb  3 13:06:23.664: INFO: Created: latency-svc-xvb8k
Feb  3 13:06:23.674: INFO: Got endpoints: latency-svc-gmpxb [3.561262092s]
Feb  3 13:06:23.701: INFO: Got endpoints: latency-svc-xvb8k [2.188034673s]
Feb  3 13:06:23.809: INFO: Created: latency-svc-c24sq
Feb  3 13:06:23.853: INFO: Got endpoints: latency-svc-c24sq [2.138282525s]
Feb  3 13:06:24.057: INFO: Created: latency-svc-pgz4c
Feb  3 13:06:24.097: INFO: Got endpoints: latency-svc-pgz4c [2.207151998s]
Feb  3 13:06:24.344: INFO: Created: latency-svc-fdtww
Feb  3 13:06:24.554: INFO: Got endpoints: latency-svc-fdtww [2.487393495s]
Feb  3 13:06:24.583: INFO: Created: latency-svc-k7jjk
Feb  3 13:06:24.602: INFO: Got endpoints: latency-svc-k7jjk [2.352159266s]
Feb  3 13:06:24.832: INFO: Created: latency-svc-zld8h
Feb  3 13:06:24.871: INFO: Got endpoints: latency-svc-zld8h [2.187461447s]
Feb  3 13:06:25.018: INFO: Created: latency-svc-jk6bj
Feb  3 13:06:25.021: INFO: Got endpoints: latency-svc-jk6bj [2.317938524s]
Feb  3 13:06:25.168: INFO: Created: latency-svc-97n5q
Feb  3 13:06:25.175: INFO: Got endpoints: latency-svc-97n5q [2.465517043s]
Feb  3 13:06:25.235: INFO: Created: latency-svc-7r65k
Feb  3 13:06:25.355: INFO: Got endpoints: latency-svc-7r65k [2.470315103s]
Feb  3 13:06:25.814: INFO: Created: latency-svc-lbz98
Feb  3 13:06:25.877: INFO: Got endpoints: latency-svc-lbz98 [2.934464249s]
Feb  3 13:06:26.404: INFO: Created: latency-svc-zwxcg
Feb  3 13:06:26.456: INFO: Got endpoints: latency-svc-zwxcg [3.356651787s]
Feb  3 13:06:26.523: INFO: Created: latency-svc-jhxwf
Feb  3 13:06:26.707: INFO: Got endpoints: latency-svc-jhxwf [3.538608955s]
Feb  3 13:06:26.751: INFO: Created: latency-svc-8d494
Feb  3 13:06:26.768: INFO: Got endpoints: latency-svc-8d494 [3.425759197s]
Feb  3 13:06:26.903: INFO: Created: latency-svc-d2sb5
Feb  3 13:06:26.927: INFO: Got endpoints: latency-svc-d2sb5 [3.452495596s]
Feb  3 13:06:26.987: INFO: Created: latency-svc-rzc7l
Feb  3 13:06:27.101: INFO: Got endpoints: latency-svc-rzc7l [3.427133843s]
Feb  3 13:06:27.145: INFO: Created: latency-svc-vp8vs
Feb  3 13:06:27.181: INFO: Got endpoints: latency-svc-vp8vs [3.480109071s]
Feb  3 13:06:27.316: INFO: Created: latency-svc-r7sps
Feb  3 13:06:27.344: INFO: Got endpoints: latency-svc-r7sps [3.490544987s]
Feb  3 13:06:27.527: INFO: Created: latency-svc-vkf5p
Feb  3 13:06:27.539: INFO: Got endpoints: latency-svc-vkf5p [3.442386291s]
Feb  3 13:06:27.569: INFO: Created: latency-svc-zpndm
Feb  3 13:06:27.705: INFO: Got endpoints: latency-svc-zpndm [3.150654809s]
Feb  3 13:06:27.726: INFO: Created: latency-svc-dhklt
Feb  3 13:06:27.729: INFO: Got endpoints: latency-svc-dhklt [3.12685865s]
Feb  3 13:06:27.806: INFO: Created: latency-svc-7df2n
Feb  3 13:06:27.885: INFO: Got endpoints: latency-svc-7df2n [3.013843344s]
Feb  3 13:06:27.954: INFO: Created: latency-svc-rwww7
Feb  3 13:06:27.981: INFO: Got endpoints: latency-svc-rwww7 [2.960262977s]
Feb  3 13:06:28.103: INFO: Created: latency-svc-vxmfj
Feb  3 13:06:28.128: INFO: Got endpoints: latency-svc-vxmfj [2.952938304s]
Feb  3 13:06:28.231: INFO: Created: latency-svc-6xxwm
Feb  3 13:06:28.336: INFO: Got endpoints: latency-svc-6xxwm [2.97987132s]
Feb  3 13:06:28.351: INFO: Created: latency-svc-sjr9z
Feb  3 13:06:28.390: INFO: Got endpoints: latency-svc-sjr9z [2.512896568s]
Feb  3 13:06:28.449: INFO: Created: latency-svc-xkpcz
Feb  3 13:06:28.577: INFO: Got endpoints: latency-svc-xkpcz [2.121247612s]
Feb  3 13:06:28.627: INFO: Created: latency-svc-jtbjg
Feb  3 13:06:28.769: INFO: Got endpoints: latency-svc-jtbjg [2.062196092s]
Feb  3 13:06:28.791: INFO: Created: latency-svc-hcr5s
Feb  3 13:06:28.812: INFO: Got endpoints: latency-svc-hcr5s [2.043701701s]
Feb  3 13:06:28.881: INFO: Created: latency-svc-hpk69
Feb  3 13:06:29.005: INFO: Got endpoints: latency-svc-hpk69 [2.077278959s]
Feb  3 13:06:29.034: INFO: Created: latency-svc-n8qzb
Feb  3 13:06:29.048: INFO: Got endpoints: latency-svc-n8qzb [1.946351566s]
Feb  3 13:06:29.048: INFO: Latencies: [243.106364ms 354.184356ms 493.472926ms 644.122355ms 816.563876ms 983.330054ms 1.02751278s 1.267283758s 1.416898898s 1.585761238s 1.648192358s 1.672314199s 1.73411595s 1.754717566s 1.870624508s 1.872576385s 1.875467661s 1.889028834s 1.895911193s 1.897670231s 1.907589222s 1.926921951s 1.946351566s 1.95595888s 1.970373673s 2.003822398s 2.026519842s 2.034886265s 2.04364039s 2.043701701s 2.054851272s 2.062196092s 2.070631732s 2.077278959s 2.081389408s 2.100580254s 2.104265702s 2.106877383s 2.121247612s 2.138282525s 2.170844239s 2.184509499s 2.186107398s 2.187461447s 2.188034673s 2.207151998s 2.211140665s 2.238334606s 2.240376305s 2.246420427s 2.247182029s 2.278516469s 2.283229611s 2.28714591s 2.28731194s 2.291308121s 2.310792358s 2.317938524s 2.319888849s 2.320523179s 2.333632746s 2.333777806s 2.338817959s 2.352159266s 2.35680588s 2.368665075s 2.374850952s 2.378039835s 2.381361128s 2.388652925s 2.394124198s 2.394733322s 2.40264599s 2.410057628s 2.422858935s 2.438846641s 2.457277929s 2.463736637s 2.465517043s 2.470315103s 2.478791975s 2.485847617s 2.487393495s 2.494531296s 2.497681457s 2.507138198s 2.510855364s 2.512896568s 2.518734341s 2.534305996s 2.540118409s 2.548308256s 2.548879643s 2.551176382s 2.560867086s 2.56341117s 2.566087585s 2.56912734s 2.575773583s 2.578318501s 2.580773608s 2.592826192s 2.59998061s 2.602564969s 2.612705263s 2.628479355s 2.643242441s 2.666207943s 2.672759217s 2.675903102s 2.676203814s 2.684603515s 2.69067061s 2.698527262s 2.70117238s 2.708048861s 2.716961592s 2.723758712s 2.73103081s 2.731583327s 2.73424682s 2.735332333s 2.751299051s 2.782748133s 2.788786755s 2.798723148s 2.817697455s 2.818495132s 2.900238694s 2.934464249s 2.952938304s 2.959541297s 2.960262977s 2.97987132s 3.013843344s 3.043424272s 3.119551901s 3.12685865s 3.142201051s 3.150654809s 3.15984421s 3.159995418s 3.213263592s 3.232145559s 3.235967258s 3.253388928s 3.253757871s 3.262477518s 3.27937424s 3.320487501s 3.32544099s 3.356651787s 3.363659851s 3.391237283s 3.396198815s 3.413627895s 3.425759197s 3.427133843s 3.442386291s 3.452495596s 3.480109071s 3.490544987s 3.538608955s 3.546061476s 3.561262092s 3.564547687s 3.574219938s 3.586070802s 3.626168843s 3.662362292s 3.665297568s 3.669418778s 3.708851088s 3.73164605s 3.732934088s 3.74544252s 3.760707687s 3.770600552s 3.783946389s 3.785447108s 3.78854527s 3.825202417s 3.83732529s 3.885310283s 3.887432999s 3.896706908s 3.945075496s 3.981924814s 4.005087281s 4.044567288s 4.063609004s 4.095477941s 4.113221546s 4.1522439s 4.160394589s 4.185206266s 4.207949956s 4.224424336s 4.232416438s 4.331197987s]
Feb  3 13:06:29.048: INFO: 50 %ile: 2.580773608s
Feb  3 13:06:29.048: INFO: 90 %ile: 3.78854527s
Feb  3 13:06:29.048: INFO: 99 %ile: 4.232416438s
Feb  3 13:06:29.048: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:06:29.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-p9dhh" for this suite.
Feb  3 13:07:29.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:07:29.411: INFO: namespace: e2e-tests-svc-latency-p9dhh, resource: bindings, ignored listing per whitelist
Feb  3 13:07:29.475: INFO: namespace e2e-tests-svc-latency-p9dhh deletion completed in 1m0.41279614s

• [SLOW TEST:107.125 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
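For reference: each "Created:" / "Got endpoints:" pair above is one latency sample (the time from service creation until its endpoints were observed), and the summary lines report the 50/90/99 %ile over 200 such samples. Below is a minimal sketch of that percentile computation using only the Go standard library and a few hand-picked durations from the list above; it is a simple nearest-rank style calculation for clarity, not the framework's own implementation.

// Illustrative sketch: deriving percentile figures from a slice of latencies.
package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile returns the sample at the given fraction (0..1) of the sorted slice.
func percentile(sorted []time.Duration, frac float64) time.Duration {
    if len(sorted) == 0 {
        return 0
    }
    idx := int(float64(len(sorted))*frac + 0.5)
    if idx >= len(sorted) {
        idx = len(sorted) - 1
    }
    return sorted[idx]
}

func main() {
    // In the real test each sample spans service creation to endpoint observation.
    samples := []time.Duration{
        243 * time.Millisecond,
        2580 * time.Millisecond,
        3788 * time.Millisecond,
        4331 * time.Millisecond,
    }
    sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

    fmt.Println("50 %ile:", percentile(samples, 0.50))
    fmt.Println("90 %ile:", percentile(samples, 0.90))
    fmt.Println("99 %ile:", percentile(samples, 0.99))
    fmt.Println("Total sample count:", len(samples))
}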
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:07:29.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-21e7fe8a-4686-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 13:07:29.682: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005" in namespace "e2e-tests-projected-vvzm2" to be "success or failure"
Feb  3 13:07:29.707: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.370433ms
Feb  3 13:07:32.037: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355457478s
Feb  3 13:07:34.068: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38599317s
Feb  3 13:07:36.083: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401107174s
Feb  3 13:07:38.230: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548314479s
Feb  3 13:07:40.242: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.56072769s
STEP: Saw pod success
Feb  3 13:07:40.243: INFO: Pod "pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 13:07:40.246: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 13:07:40.734: INFO: Waiting for pod pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005 to disappear
Feb  3 13:07:40.944: INFO: Pod pod-projected-secrets-21e8bdb0-4686-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:07:40.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vvzm2" for this suite.
Feb  3 13:07:47.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:07:47.050: INFO: namespace: e2e-tests-projected-vvzm2, resource: bindings, ignored listing per whitelist
Feb  3 13:07:47.218: INFO: namespace e2e-tests-projected-vvzm2 deletion completed in 6.254124519s

• [SLOW TEST:17.742 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
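For reference: the projected-secret test above creates a Secret, mounts it into a pod through a projected volume, and then checks the container could read the data before it exited. The sketch below builds such a pod spec with the k8s.io/api types; the secret name, image, and mount path are illustrative assumptions rather than values taken from this run (only the container-name pattern matches the log).

// Illustrative sketch: a pod consuming a Secret via a projected volume.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-secret-test", // hypothetical Secret name
                                },
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-secret-volume-test",
                Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume", // assumed mount path
                    ReadOnly:  true,
                }},
            }},
        },
    }
    fmt.Println("pod spec built for", pod.Name)
}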
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:07:47.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  3 13:08:00.262: INFO: Successfully updated pod "pod-update-2c987189-4686-11ea-ab15-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  3 13:08:00.356: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:08:00.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tb69n" for this suite.
Feb  3 13:08:26.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:08:26.713: INFO: namespace: e2e-tests-pods-tb69n, resource: bindings, ignored listing per whitelist
Feb  3 13:08:26.870: INFO: namespace e2e-tests-pods-tb69n deletion completed in 26.492263755s

• [SLOW TEST:39.652 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
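A minimal client-go sketch of the "updating the pod" step above: get the pod, mutate a label, and write it back. It assumes a recent client-go release; the namespace, pod name, and label are illustrative.

// Fetch a pod, change a label, and update it, as the pod-update case does.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	ns, name := "default", "pod-update-example" // assumed namespace and pod name

	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Mutate something that is updatable in place (labels here); a real
	// caller may need to retry on a conflict if the pod changed concurrently.
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"

	if _, err := client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Pod update OK")
}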
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:08:26.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:08:27.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005" in namespace "e2e-tests-downward-api-tccqt" to be "success or failure"
Feb  3 13:08:27.231: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.440562ms
Feb  3 13:08:29.270: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062824275s
Feb  3 13:08:31.292: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084791526s
Feb  3 13:08:33.392: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183951407s
Feb  3 13:08:35.412: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204073159s
Feb  3 13:08:37.426: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218427065s
STEP: Saw pod success
Feb  3 13:08:37.426: INFO: Pod "downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 13:08:37.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005 container client-container: 
STEP: delete the pod
Feb  3 13:08:38.191: INFO: Waiting for pod downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005 to disappear
Feb  3 13:08:38.665: INFO: Pod downwardapi-volume-4430619c-4686-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:08:38.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tccqt" for this suite.
Feb  3 13:08:44.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:08:44.806: INFO: namespace: e2e-tests-downward-api-tccqt, resource: bindings, ignored listing per whitelist
Feb  3 13:08:44.876: INFO: namespace e2e-tests-downward-api-tccqt deletion completed in 6.199376979s

• [SLOW TEST:18.006 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
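A short sketch, under assumed names and image, of a pod whose downward API volume sets an explicit DefaultMode on the projected files, which is what the case above exercises.

// Downward API volume with DefaultMode; the container checks the file mode.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // files in the volume are created owner-read-only

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	fmt.Printf("%+v\n", pod.Spec.Volumes[0].DownwardAPI)
}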
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:08:44.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  3 13:08:45.111: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 13:08:45.121: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 13:08:45.125: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  3 13:08:45.138: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:08:45.138: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  3 13:08:45.138: INFO: 	Container coredns ready: true, restart count 0
Feb  3 13:08:45.138: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  3 13:08:45.138: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 13:08:45.138: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:08:45.138: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  3 13:08:45.138: INFO: 	Container weave ready: true, restart count 0
Feb  3 13:08:45.138: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 13:08:45.138: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  3 13:08:45.138: INFO: 	Container coredns ready: true, restart count 0
Feb  3 13:08:45.138: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  3 13:08:45.138: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15efe667363e4255], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:08:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sjmbm" for this suite.
Feb  3 13:08:52.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:08:52.390: INFO: namespace: e2e-tests-sched-pred-sjmbm, resource: bindings, ignored listing per whitelist
Feb  3 13:08:52.470: INFO: namespace e2e-tests-sched-pred-sjmbm deletion completed in 6.257099386s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.594 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
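A sketch of the kind of pod this predicates case submits: its NodeSelector matches no node label, so it stays Pending with the FailedScheduling event quoted above. The label key/value and pod name are assumptions.

// Pod with a non-matching NodeSelector; the scheduler cannot place it.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the pod stays Pending and the
			// scheduler records a FailedScheduling event:
			// "0/1 nodes are available: 1 node(s) didn't match node selector."
			NodeSelector: map[string]string{
				"label": "value-that-matches-no-node",
			},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}

	fmt.Printf("%+v\n", pod.Spec.NodeSelector)
}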
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:08:52.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5362d0df-4686-11ea-ab15-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  3 13:08:52.694: INFO: Waiting up to 5m0s for pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005" in namespace "e2e-tests-secrets-bzphv" to be "success or failure"
Feb  3 13:08:52.701: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817732ms
Feb  3 13:08:54.713: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019423365s
Feb  3 13:08:56.765: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071516359s
Feb  3 13:08:59.186: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492617574s
Feb  3 13:09:01.225: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531487857s
Feb  3 13:09:03.914: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.220750733s
STEP: Saw pod success
Feb  3 13:09:03.915: INFO: Pod "pod-secrets-536377a7-4686-11ea-ab15-0242ac110005" satisfied condition "success or failure"
Feb  3 13:09:04.364: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-536377a7-4686-11ea-ab15-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  3 13:09:04.499: INFO: Waiting for pod pod-secrets-536377a7-4686-11ea-ab15-0242ac110005 to disappear
Feb  3 13:09:04.568: INFO: Pod pod-secrets-536377a7-4686-11ea-ab15-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:09:04.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bzphv" for this suite.
Feb  3 13:09:10.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:09:10.757: INFO: namespace: e2e-tests-secrets-bzphv, resource: bindings, ignored listing per whitelist
Feb  3 13:09:10.842: INFO: namespace e2e-tests-secrets-bzphv deletion completed in 6.258196733s

• [SLOW TEST:18.370 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
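A sketch of the "with mappings" variant above: a Secret volume whose Items remap a key onto a custom file path inside the mount. The Secret name, key, and image are assumptions, not the test's generated values.

// Secret volume with a key-to-path mapping; the container reads the mapped file.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapping-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map", // assumed Secret name
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1", // the key is exposed under this path
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	fmt.Printf("%+v\n", pod.Spec.Volumes[0].Secret)
}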
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:09:10.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 13:09:11.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:13.243: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 13:09:13.244: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  3 13:09:13.289: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  3 13:09:13.350: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  3 13:09:13.533: INFO: scanned /root for discovery docs: 
Feb  3 13:09:13.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:38.011: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 13:09:38.012: INFO: stdout: "Created e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191\nScaling up e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb  3 13:09:38.012: INFO: stdout: "Created e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191\nScaling up e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  3 13:09:38.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:38.229: INFO: stderr: ""
Feb  3 13:09:38.229: INFO: stdout: "e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191-7nkjg e2e-test-nginx-rc-48nxb "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 13:09:43.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:43.443: INFO: stderr: ""
Feb  3 13:09:43.443: INFO: stdout: "e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191-7nkjg "
Feb  3 13:09:43.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191-7nkjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:43.577: INFO: stderr: ""
Feb  3 13:09:43.578: INFO: stdout: "true"
Feb  3 13:09:43.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191-7nkjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:43.683: INFO: stderr: ""
Feb  3 13:09:43.683: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  3 13:09:43.683: INFO: e2e-test-nginx-rc-15bcc69dbff25c7f0949a440a48d8191-7nkjg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  3 13:09:43.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-prwcp'
Feb  3 13:09:43.870: INFO: stderr: ""
Feb  3 13:09:43.871: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:09:43.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-prwcp" for this suite.
Feb  3 13:10:08.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:10:08.136: INFO: namespace: e2e-tests-kubectl-prwcp, resource: bindings, ignored listing per whitelist
Feb  3 13:10:08.204: INFO: namespace e2e-tests-kubectl-prwcp deletion completed in 24.310183456s

• [SLOW TEST:57.361 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
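The commands this case drives appear verbatim in the log above; the Go sketch below replays the same sequence through os/exec. Both kubectl subcommands are deprecated (as the stderr lines note) and were removed in later kubectl releases; the namespace here is an assumption.

// Replay of the rolling-update command sequence shown in the log.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--kubeconfig=/root/.kube/config"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	log.Printf("%s", out)
}

func main() {
	ns := "--namespace=example" // assumed namespace

	// Create a ReplicationController running the image.
	run("run", "e2e-test-nginx-rc",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--generator=run/v1", ns)

	// Roll the RC over to the same image; kubectl creates a temporary RC,
	// scales it up while scaling the old one down, then renames it back.
	run("rolling-update", "e2e-test-nginx-rc",
		"--update-period=1s",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--image-pull-policy=IfNotPresent", ns)

	// Clean up, as the AfterEach step above does.
	run("delete", "rc", "e2e-test-nginx-rc", ns)
}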
SSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:10:08.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:10:08.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-t2hqj" for this suite.
Feb  3 13:10:14.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:10:14.729: INFO: namespace: e2e-tests-services-t2hqj, resource: bindings, ignored listing per whitelist
Feb  3 13:10:14.755: INFO: namespace e2e-tests-services-t2hqj deletion completed in 6.244504767s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.551 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  3 13:10:14.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  3 13:10:27.657: INFO: Successfully updated pod "annotationupdate8462080a-4686-11ea-ab15-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  3 13:10:29.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9rdtz" for this suite.
Feb  3 13:11:09.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:11:09.972: INFO: namespace: e2e-tests-downward-api-9rdtz, resource: bindings, ignored listing per whitelist
Feb  3 13:11:09.993: INFO: namespace e2e-tests-downward-api-9rdtz deletion completed in 40.255059313s

• [SLOW TEST:55.238 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
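A sketch of the annotation-update case: a downward API volume file backed by metadata.annotations, which the kubelet rewrites after the pod's annotations change (the "Successfully updated pod" line above). Pod name, annotation, and image are assumptions.

// Downward API volume exposing pod annotations; updating the annotations
// causes the projected file to be refreshed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "alpha"}, // later updated, e.g. to "beta"
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.annotations",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	fmt.Printf("%+v\n", pod.ObjectMeta.Annotations)
}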
SSS
Feb  3 13:11:09.994: INFO: Running AfterSuite actions on all nodes
Feb  3 13:11:09.994: INFO: Running AfterSuite actions on node 1
Feb  3 13:11:09.994: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8632.419 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS