I0202 10:47:14.700649 9 e2e.go:224] Starting e2e run "5f00f0e4-45a9-11ea-8b99-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580640433 - Will randomize all specs
Will run 201 of 2164 specs

Feb 2 10:47:15.080: INFO: >>> kubeConfig: /root/.kube/config
Feb 2 10:47:15.084: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 2 10:47:15.105: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 2 10:47:15.160: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 2 10:47:15.160: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 2 10:47:15.160: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 2 10:47:15.176: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 2 10:47:15.176: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 2 10:47:15.176: INFO: e2e test version: v1.13.12
Feb 2 10:47:15.183: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:47:15.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Feb 2 10:47:15.483: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 2 10:47:15.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:47:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-znm5v" for this suite.
Feb 2 10:48:12.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:48:12.366: INFO: namespace: e2e-tests-pods-znm5v, resource: bindings, ignored listing per whitelist
Feb 2 10:48:12.533: INFO: namespace e2e-tests-pods-znm5v deletion completed in 44.316026872s
• [SLOW TEST:57.351 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:48:12.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 2 10:48:13.903: INFO: Pod name wrapped-volume-race-83011c60-45a9-11ea-8b99-0242ac110005: Found 0 pods out of 5
Feb 2 10:48:19.016: INFO: Pod name wrapped-volume-race-83011c60-45a9-11ea-8b99-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-83011c60-45a9-11ea-8b99-0242ac110005 in namespace e2e-tests-emptydir-wrapper-hk2m4, will wait for the garbage collector to delete the pods
Feb 2 10:50:34.515: INFO: Deleting ReplicationController wrapped-volume-race-83011c60-45a9-11ea-8b99-0242ac110005 took: 792.329455ms
Feb 2 10:50:35.016: INFO: Terminating ReplicationController wrapped-volume-race-83011c60-45a9-11ea-8b99-0242ac110005 pods took: 500.643021ms
STEP: Creating RC which spawns configmap-volume pods
Feb 2 10:51:23.007: INFO: Pod name wrapped-volume-race-f3bba1df-45a9-11ea-8b99-0242ac110005: Found 0 pods out of 5
Feb 2 10:51:28.041: INFO: Pod name wrapped-volume-race-f3bba1df-45a9-11ea-8b99-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f3bba1df-45a9-11ea-8b99-0242ac110005 in namespace e2e-tests-emptydir-wrapper-hk2m4, will wait for the garbage collector to delete the pods
Feb 2 10:53:52.205: INFO: Deleting ReplicationController wrapped-volume-race-f3bba1df-45a9-11ea-8b99-0242ac110005 took: 43.968656ms
Feb 2 10:53:52.806: INFO: Terminating ReplicationController wrapped-volume-race-f3bba1df-45a9-11ea-8b99-0242ac110005 pods took: 600.866089ms
STEP: Creating RC which spawns configmap-volume pods
Feb 2 10:54:43.039: INFO: Pod name wrapped-volume-race-6aefb2c4-45aa-11ea-8b99-0242ac110005: Found 0 pods out of 5
Feb 2 10:54:48.077: INFO: Pod name wrapped-volume-race-6aefb2c4-45aa-11ea-8b99-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6aefb2c4-45aa-11ea-8b99-0242ac110005 in namespace e2e-tests-emptydir-wrapper-hk2m4, will wait for the garbage collector to delete the pods
Feb 2 10:56:42.275: INFO: Deleting ReplicationController wrapped-volume-race-6aefb2c4-45aa-11ea-8b99-0242ac110005 took: 26.21131ms
Feb 2 10:56:42.676: INFO: Terminating ReplicationController wrapped-volume-race-6aefb2c4-45aa-11ea-8b99-0242ac110005 pods took: 400.725007ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:57:25.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-hk2m4" for this suite.
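The wrapped-volume-race RC above spawns pods that each mount many of the 50 configmaps as volumes at once, which is the access pattern that historically raced inside the emptyDir wrapper. A minimal sketch of such a pod spec (pod name, image, and configMap names here are hypothetical illustrations, not taken from the log, and only two of the many volumes are shown):

```yaml
# Hypothetical sketch of the pod shape the wrapped-volume-race RC stamps out:
# one container mounting several configMap-backed volumes simultaneously.
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example   # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox                    # hypothetical image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm-vol-0
      mountPath: /etc/cm-0
    - name: cm-vol-1
      mountPath: /etc/cm-1
  volumes:
  - name: cm-vol-0
    configMap:
      name: racey-configmap-0         # hypothetical configMap name
  - name: cm-vol-1
    configMap:
      name: racey-configmap-1
```

The race only surfaces when many such pods mounting many such volumes start concurrently, which is why the test creates the RC and tears it down three times in a row.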
Feb 2 10:57:36.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:57:36.031: INFO: namespace: e2e-tests-emptydir-wrapper-hk2m4, resource: bindings, ignored listing per whitelist
Feb 2 10:57:36.178: INFO: namespace e2e-tests-emptydir-wrapper-hk2m4 deletion completed in 10.218059198s
• [SLOW TEST:563.644 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:57:36.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 2 10:57:36.302: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-mkw22" to be "success or failure"
Feb 2 10:57:36.378: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.792544ms
Feb 2 10:57:38.392: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089842592s
Feb 2 10:57:40.417: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114675609s
Feb 2 10:57:42.435: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132402385s
Feb 2 10:57:44.450: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.147428369s
Feb 2 10:57:46.467: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164872999s
STEP: Saw pod success
Feb 2 10:57:46.467: INFO: Pod "downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb 2 10:57:47.013: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005 container client-container:
STEP: delete the pod
Feb 2 10:57:47.260: INFO: Waiting for pod downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005 to disappear
Feb 2 10:57:47.275: INFO: Pod downwardapi-volume-d248195d-45aa-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:57:47.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mkw22" for this suite.
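The downward API volume test above creates a pod whose own cpu request is projected into a file, then reads that file back from the container's logs. A hedged sketch of the pod spec pattern being exercised (the container name `client-container` appears in the log; the pod name, image, command, and request value are illustrative assumptions):

```yaml
# Sketch of a downward API volume projecting the container's cpu request
# into /etc/podinfo/cpu_request inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name as reported in the log
    image: busybox                    # hypothetical image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                     # hypothetical request value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                 # report the request in millicores
```

With a divisor of `1m`, a 250m request is written as `250`; the test's "success or failure" check is simply that the pod runs to completion and its log contains the expected value.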
Feb 2 10:57:53.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:57:53.431: INFO: namespace: e2e-tests-downward-api-mkw22, resource: bindings, ignored listing per whitelist
Feb 2 10:57:53.531: INFO: namespace e2e-tests-downward-api-mkw22 deletion completed in 6.245508635s
• [SLOW TEST:17.353 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:57:53.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-htggc
Feb 2 10:58:03.944: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-htggc
STEP: checking the pod's current state and verifying that restartCount is present
Feb 2 10:58:04.003: INFO: Initial restart count of pod liveness-exec is 0
Feb 2 10:58:55.467: INFO: Restart count of pod e2e-tests-container-probe-htggc/liveness-exec is now 1 (51.463898845s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:58:55.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-htggc" for this suite.
Feb 2 10:59:01.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:59:02.220: INFO: namespace: e2e-tests-container-probe-htggc, resource: bindings, ignored listing per whitelist
Feb 2 10:59:02.269: INFO: namespace e2e-tests-container-probe-htggc deletion completed in 6.465635791s
• [SLOW TEST:68.737 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:59:02.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-05acac8c-45ab-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 2 10:59:02.573: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-dz6fx" to be "success or failure"
Feb 2 10:59:02.684: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 110.912113ms
Feb 2 10:59:04.703: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129680047s
Feb 2 10:59:06.717: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143233662s
Feb 2 10:59:08.756: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182298222s
Feb 2 10:59:10.790: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.216854964s
STEP: Saw pod success
Feb 2 10:59:10.791: INFO: Pod "pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb 2 10:59:10.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Feb 2 10:59:11.029: INFO: Waiting for pod pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005 to disappear
Feb 2 10:59:11.119: INFO: Pod pod-projected-configmaps-05afc57e-45ab-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:59:11.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dz6fx" for this suite.
Feb 2 10:59:17.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:59:17.285: INFO: namespace: e2e-tests-projected-dz6fx, resource: bindings, ignored listing per whitelist
Feb 2 10:59:17.368: INFO: namespace e2e-tests-projected-dz6fx deletion completed in 6.230398955s
• [SLOW TEST:15.099 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:59:17.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0ea5ba47-45ab-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 2 10:59:17.585: INFO: Waiting up to 5m0s for pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-bdbq8" to be "success or failure"
Feb 2 10:59:17.598: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.130595ms
Feb 2 10:59:19.613: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027855569s
Feb 2 10:59:21.625: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040403517s
Feb 2 10:59:23.960: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375571792s
Feb 2 10:59:26.113: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528572393s
Feb 2 10:59:28.145: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.560087853s
STEP: Saw pod success
Feb 2 10:59:28.145: INFO: Pod "pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb 2 10:59:28.150: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005 container secret-volume-test:
STEP: delete the pod
Feb 2 10:59:28.223: INFO: Waiting for pod pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005 to disappear
Feb 2 10:59:28.247: INFO: Pod pod-secrets-0ea6af6c-45ab-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 10:59:28.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bdbq8" for this suite.
Feb 2 10:59:36.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 10:59:36.577: INFO: namespace: e2e-tests-secrets-bdbq8, resource: bindings, ignored listing per whitelist
Feb 2 10:59:36.598: INFO: namespace e2e-tests-secrets-bdbq8 deletion completed in 8.301320593s
• [SLOW TEST:19.229 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 10:59:36.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0202 11:00:19.641598 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 2 11:00:19.641: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 11:00:19.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8h7dp" for this suite.
Feb 2 11:00:30.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 11:00:30.861: INFO: namespace: e2e-tests-gc-8h7dp, resource: bindings, ignored listing per whitelist
Feb 2 11:00:30.926: INFO: namespace e2e-tests-gc-8h7dp deletion completed in 11.277825373s
• [SLOW TEST:54.328 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 11:00:30.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 2 11:00:31.654: INFO: Waiting up to 5m0s for pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005" in namespace "e2e-tests-containers-gf8sh" to be "success or failure"
Feb 2 11:00:31.664: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583965ms
Feb 2 11:00:33.931: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276707504s
Feb 2 11:00:36.192: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537735873s
Feb 2 11:00:38.203: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548837053s
Feb 2 11:00:40.447: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.793462255s
Feb 2 11:00:42.465: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811422045s
Feb 2 11:00:44.536: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.882131452s
Feb 2 11:00:46.561: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.907484526s
Feb 2 11:00:48.618: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.963702151s
STEP: Saw pod success
Feb 2 11:00:48.618: INFO: Pod "client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb 2 11:00:48.639: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005 container test-container:
STEP: delete the pod
Feb 2 11:00:48.916: INFO: Waiting for pod client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005 to disappear
Feb 2 11:00:48.927: INFO: Pod client-containers-3ab6dd1e-45ab-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 11:00:48.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gf8sh" for this suite.
Feb 2 11:00:55.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 11:00:55.239: INFO: namespace: e2e-tests-containers-gf8sh, resource: bindings, ignored listing per whitelist
Feb 2 11:00:55.263: INFO: namespace e2e-tests-containers-gf8sh deletion completed in 6.327864436s
• [SLOW TEST:24.336 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 11:00:55.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 2 11:01:06.186: INFO: Successfully updated pod "labelsupdate490399ff-45ab-11ea-8b99-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 11:01:08.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h4k5b" for this suite.
Feb 2 11:01:32.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 11:01:32.642: INFO: namespace: e2e-tests-downward-api-h4k5b, resource: bindings, ignored listing per whitelist
Feb 2 11:01:32.741: INFO: namespace e2e-tests-downward-api-h4k5b deletion completed in 24.393948084s
• [SLOW TEST:37.478 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 2 11:01:32.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 2 11:01:33.158: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293915,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 2 11:01:33.158: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293916,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 2 11:01:33.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293917,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 2 11:01:43.235: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293931,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 2 11:01:43.236: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293932,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 2 11:01:43.236: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fjv2x,SelfLink:/api/v1/namespaces/e2e-tests-watch-fjv2x/configmaps/e2e-watch-test-label-changed,UID:5f5b8ccd-45ab-11ea-a994-fa163e34d433,ResourceVersion:20293933,Generation:0,CreationTimestamp:2020-02-02 11:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 11:01:43.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fjv2x" for this suite.
Feb 2 11:01:51.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 11:01:51.354: INFO: namespace: e2e-tests-watch-fjv2x, resource: bindings, ignored listing per whitelist Feb 2 11:01:51.434: INFO: namespace e2e-tests-watch-fjv2x deletion completed in 8.189393341s • [SLOW TEST:18.692 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 2 11:01:51.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 2 11:01:51.618: INFO: Creating deployment "nginx-deployment" Feb 2 11:01:51.634: INFO: Waiting for observed generation 1 Feb 2 11:01:53.872: INFO: Waiting for all required pods to come up Feb 2 11:01:53.889: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 2 11:02:29.929: INFO: Waiting for deployment "nginx-deployment" to complete Feb 2 11:02:29.962: INFO: Updating 
deployment "nginx-deployment" with a non-existent image Feb 2 11:02:29.987: INFO: Updating deployment nginx-deployment Feb 2 11:02:29.987: INFO: Waiting for observed generation 2 Feb 2 11:02:32.647: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 2 11:02:32.720: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 2 11:02:33.452: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 2 11:02:33.540: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 2 11:02:33.541: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 2 11:02:33.882: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 2 11:02:33.897: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 2 11:02:33.897: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 2 11:02:34.582: INFO: Updating deployment nginx-deployment Feb 2 11:02:34.582: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 2 11:02:34.932: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 2 11:02:35.493: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 2 11:02:36.053: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2cmqb/deployments/nginx-deployment,UID:6a77e385-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294173,Generation:3,CreationTimestamp:2020-02-02 11:01:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-02 11:02:31 +0000 UTC 2020-02-02 11:01:51 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-02 11:02:34 +0000 UTC 2020-02-02 11:02:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 2 11:02:36.066: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2cmqb/replicasets/nginx-deployment-5c98f8fb5,UID:81590e3a-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294167,Generation:3,CreationTimestamp:2020-02-02 11:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6a77e385-45ab-11ea-a994-fa163e34d433 0xc001a0abd7 0xc001a0abd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 2 11:02:36.066: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 2 11:02:36.066: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2cmqb/replicasets/nginx-deployment-85ddf47c5d,UID:6a7c9c72-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294165,Generation:3,CreationTimestamp:2020-02-02 11:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6a77e385-45ab-11ea-a994-fa163e34d433 0xc001a0ac97 0xc001a0ac98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 2 11:02:37.026: INFO: Pod "nginx-deployment-5c98f8fb5-bw4cn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bw4cn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2cmqb/pods/nginx-deployment-5c98f8fb5-bw4cn,UID:81d7919e-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294164,Generation:0,CreationTimestamp:2020-02-02 11:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 81590e3a-45ab-11ea-a994-fa163e34d433 0xc001a0b647 0xc001a0b648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcdjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcdjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xcdjk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0b6b0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001a0b6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-02 11:02:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 2 11:02:37.026: INFO: Pod "nginx-deployment-5c98f8fb5-d42l4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d42l4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2cmqb/pods/nginx-deployment-5c98f8fb5-d42l4,UID:81749181-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294160,Generation:0,CreationTimestamp:2020-02-02 11:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 81590e3a-45ab-11ea-a994-fa163e34d433 0xc001a0b797 0xc001a0b798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcdjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcdjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xcdjk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0b800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0b820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-02 11:02:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 2 11:02:37.027: INFO: Pod "nginx-deployment-5c98f8fb5-mqd6x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mqd6x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2cmqb/pods/nginx-deployment-5c98f8fb5-mqd6x,UID:81755ae5-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294157,Generation:0,CreationTimestamp:2020-02-02 11:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 81590e3a-45ab-11ea-a994-fa163e34d433 0xc001a0b8e7 0xc001a0b8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcdjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcdjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xcdjk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0b950} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0b970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-02 11:02:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 2 11:02:37.027: INFO: Pod "nginx-deployment-5c98f8fb5-qs492" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qs492,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2cmqb/pods/nginx-deployment-5c98f8fb5-qs492,UID:816c14b3-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294139,Generation:0,CreationTimestamp:2020-02-02 11:02:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 81590e3a-45ab-11ea-a994-fa163e34d433 0xc001a0ba37 0xc001a0ba38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcdjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcdjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xcdjk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0baa0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001a0bac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:02:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-02 11:02:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 2 11:02:37.027: INFO: Pod "nginx-deployment-5c98f8fb5-rbwbn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rbwbn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2cmqb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2cmqb/pods/nginx-deployment-5c98f8fb5-rbwbn,UID:84f74456-45ab-11ea-a994-fa163e34d433,ResourceVersion:20294181,Generation:0,CreationTimestamp:2020-02-02 11:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 81590e3a-45ab-11ea-a994-fa163e34d433 0xc001a0bb87 0xc001a0bb88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcdjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcdjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
(Pod dump condensed; continuation of the preceding INFO entry) ns=e2e-tests-deployment-2cmqb, image=nginx:404, phase=Pending, node=<unset>, qosClass=BestEffort
Feb 2 11:02:37.028: INFO: Pod "nginx-deployment-5c98f8fb5-rqw27" is not available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:404, created=2020-02-02 11:02:36 UTC, phase=Pending, node=<unset>, qosClass=BestEffort
Feb 2 11:02:37.028: INFO: Pod "nginx-deployment-5c98f8fb5-rssw9" is not available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:404, created=2020-02-02 11:02:30 UTC, phase=Pending, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, startTime=11:02:31, container nginx waiting: ContainerCreating, Ready=False (ContainersNotReady)
Feb 2 11:02:37.029: INFO: Pod "nginx-deployment-5c98f8fb5-tgzbh" is not available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:404, created=2020-02-02 11:02:36 UTC, phase=Pending, node=hunter-server-hu5at5svl7ps (PodScheduled 11:02:36)
Feb 2 11:02:37.029: INFO: Pod "nginx-deployment-85ddf47c5d-7kp4r" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.5, startTime=11:01:52, container started 11:02:14, Ready=True (11:02:20)
Feb 2 11:02:37.029: INFO: Pod "nginx-deployment-85ddf47c5d-7pqm6" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.4, startTime=11:01:52, container started 11:02:14, Ready=True (11:02:24)
Feb 2 11:02:37.030: INFO: Pod "nginx-deployment-85ddf47c5d-f48bf" is not available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:02:36 UTC, phase=Pending, node=<unset>, qosClass=BestEffort
Feb 2 11:02:37.035: INFO: Pod "nginx-deployment-85ddf47c5d-flrjr" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.8, startTime=11:01:52, container started 11:02:21, Ready=True (11:02:24)
Feb 2 11:02:37.037: INFO: Pod "nginx-deployment-85ddf47c5d-g9sx8" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.11, startTime=11:01:55, container started 11:02:22, Ready=True (11:02:24)
Feb 2 11:02:37.038: INFO: Pod "nginx-deployment-85ddf47c5d-gklgg" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.12, startTime=11:01:53, container started 11:02:22, Ready=True (11:02:24)
Feb 2 11:02:37.038: INFO: Pod "nginx-deployment-85ddf47c5d-jtvgt" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.9, startTime=11:01:52, container started 11:02:22, Ready=True (11:02:24)
Feb 2 11:02:37.039: INFO: Pod "nginx-deployment-85ddf47c5d-nlmrp" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.7, startTime=11:01:52, container started 11:02:19, Ready=True (11:02:24)
Feb 2 11:02:37.040: INFO: Pod "nginx-deployment-85ddf47c5d-v9tt7" is available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:01:51 UTC, phase=Running, node=hunter-server-hu5at5svl7ps, hostIP=10.96.1.240, podIP=10.32.0.6, startTime=11:01:52, container started 11:02:17, Ready=True (11:02:24)
Feb 2 11:02:37.040: INFO: Pod "nginx-deployment-85ddf47c5d-wlqr2" is not available: (Pod dump condensed) ns=e2e-tests-deployment-2cmqb, image=nginx:1.14-alpine, created=2020-02-02 11:02:36 UTC, phase=Pending, node=<unset>, qosClass=BestEffort
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 2 11:02:37.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2cmqb" for this suite.
Feb 2 11:03:09.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 11:03:09.779: INFO: namespace: e2e-tests-deployment-2cmqb, resource: bindings, ignored listing per whitelist Feb 2 11:03:09.865: INFO: namespace e2e-tests-deployment-2cmqb deletion completed in 31.671645508s • [SLOW TEST:78.430 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 2 11:03:09.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 2 11:03:10.461: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 2 11:03:10.472: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9wstm/daemonsets","resourceVersion":"20294592"},"items":null} Feb 2 11:03:10.478: 
INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9wstm/pods","resourceVersion":"20294592"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 2 11:03:10.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9wstm" for this suite. Feb 2 11:03:21.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 11:03:24.503: INFO: namespace: e2e-tests-daemonsets-9wstm, resource: bindings, ignored listing per whitelist Feb 2 11:03:24.534: INFO: namespace e2e-tests-daemonsets-9wstm deletion completed in 14.041624636s S [SKIPPING] [14.669 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 2 11:03:10.461: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 2 11:03:24.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a33fd6a1-45ab-11ea-8b99-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 2 11:03:26.959: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-ggqhh" to be "success or failure" Feb 2 11:03:27.820: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 860.664888ms Feb 2 11:03:29.830: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871178722s Feb 2 11:03:31.888: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.928892561s Feb 2 11:03:33.947: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.987425933s Feb 2 11:03:35.961: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001705979s Feb 2 11:03:38.662: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.702738449s Feb 2 11:03:40.676: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.717230052s Feb 2 11:03:42.685: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.725850447s Feb 2 11:03:44.707: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.747868503s Feb 2 11:03:48.137: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.177494391s Feb 2 11:03:50.145: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.185550409s Feb 2 11:03:52.195: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.236194025s Feb 2 11:03:54.207: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.247842219s Feb 2 11:03:56.226: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.266855033s STEP: Saw pod success Feb 2 11:03:56.226: INFO: Pod "pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005" satisfied condition "success or failure" Feb 2 11:03:56.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 2 11:03:56.406: INFO: Waiting for pod pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005 to disappear Feb 2 11:03:56.418: INFO: Pod pod-projected-configmaps-a3421c30-45ab-11ea-8b99-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 2 11:03:56.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ggqhh" for this suite. 
Feb 2 11:04:02.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 11:04:02.653: INFO: namespace: e2e-tests-projected-ggqhh, resource: bindings, ignored listing per whitelist Feb 2 11:04:02.678: INFO: namespace e2e-tests-projected-ggqhh deletion completed in 6.250713125s • [SLOW TEST:38.143 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 2 11:04:02.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 2 11:04:03.000: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 16.706746ms)
Feb  2 11:04:03.006: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.760056ms)
Feb  2 11:04:03.012: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.420453ms)
Feb  2 11:04:03.018: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.920722ms)
Feb  2 11:04:03.024: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.114372ms)
Feb  2 11:04:03.030: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.878625ms)
Feb  2 11:04:03.037: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.739822ms)
Feb  2 11:04:03.042: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.206751ms)
Feb  2 11:04:03.048: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.661669ms)
Feb  2 11:04:03.055: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.324646ms)
Feb  2 11:04:03.061: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.210257ms)
Feb  2 11:04:03.069: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.955188ms)
Feb  2 11:04:03.204: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 134.91716ms)
Feb  2 11:04:03.244: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 40.023488ms)
Feb  2 11:04:03.273: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 29.260794ms)
Feb  2 11:04:03.291: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.718907ms)
Feb  2 11:04:03.302: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.16246ms)
Feb  2 11:04:03.311: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.56659ms)
Feb  2 11:04:03.318: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.296301ms)
Feb  2 11:04:03.330: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.480392ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:04:03.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-6h6g2" for this suite.
Feb  2 11:04:09.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:04:09.433: INFO: namespace: e2e-tests-proxy-6h6g2, resource: bindings, ignored listing per whitelist
Feb  2 11:04:09.521: INFO: namespace e2e-tests-proxy-6h6g2 deletion completed in 6.183316125s

• [SLOW TEST:6.843 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:04:09.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-2bfh
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 11:04:09.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2bfh" in namespace "e2e-tests-subpath-4f4n2" to be "success or failure"
Feb  2 11:04:09.886: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 19.714775ms
Feb  2 11:04:11.905: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038355079s
Feb  2 11:04:13.972: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105551611s
Feb  2 11:04:16.000: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133283409s
Feb  2 11:04:18.042: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175963612s
Feb  2 11:04:20.065: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198540383s
Feb  2 11:04:22.079: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.212646305s
Feb  2 11:04:25.733: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.866256462s
Feb  2 11:04:27.745: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 17.878897962s
Feb  2 11:04:29.755: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 19.889090114s
Feb  2 11:04:31.784: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 21.918036537s
Feb  2 11:04:33.802: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 23.936043258s
Feb  2 11:04:35.816: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 25.949723667s
Feb  2 11:04:37.841: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 27.974789884s
Feb  2 11:04:39.859: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 29.992838683s
Feb  2 11:04:41.957: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Running", Reason="", readiness=false. Elapsed: 32.090941372s
Feb  2 11:04:43.967: INFO: Pod "pod-subpath-test-configmap-2bfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.10060248s
STEP: Saw pod success
Feb  2 11:04:43.967: INFO: Pod "pod-subpath-test-configmap-2bfh" satisfied condition "success or failure"
Feb  2 11:04:43.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-2bfh container test-container-subpath-configmap-2bfh: 
STEP: delete the pod
Feb  2 11:04:44.239: INFO: Waiting for pod pod-subpath-test-configmap-2bfh to disappear
Feb  2 11:04:44.487: INFO: Pod pod-subpath-test-configmap-2bfh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2bfh
Feb  2 11:04:44.487: INFO: Deleting pod "pod-subpath-test-configmap-2bfh" in namespace "e2e-tests-subpath-4f4n2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:04:44.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4f4n2" for this suite.
Feb  2 11:04:52.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:04:52.874: INFO: namespace: e2e-tests-subpath-4f4n2, resource: bindings, ignored listing per whitelist
Feb  2 11:04:52.987: INFO: namespace e2e-tests-subpath-4f4n2 deletion completed in 8.426395298s

• [SLOW TEST:43.465 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:04:52.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  2 11:04:53.488: INFO: Waiting up to 5m0s for pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-w74wj" to be "success or failure"
Feb  2 11:04:53.502: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.722007ms
Feb  2 11:04:55.714: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226058184s
Feb  2 11:04:57.739: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251755012s
Feb  2 11:05:00.143: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655575766s
Feb  2 11:05:02.202: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714806067s
Feb  2 11:05:04.215: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.727534899s
STEP: Saw pod success
Feb  2 11:05:04.215: INFO: Pod "downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:05:04.220: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 11:05:04.511: INFO: Waiting for pod downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005 to disappear
Feb  2 11:05:04.722: INFO: Pod downward-api-d6dc05ed-45ab-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:05:04.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w74wj" for this suite.
Feb  2 11:05:10.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:05:10.904: INFO: namespace: e2e-tests-downward-api-w74wj, resource: bindings, ignored listing per whitelist
Feb  2 11:05:11.356: INFO: namespace e2e-tests-downward-api-w74wj deletion completed in 6.609474084s

• [SLOW TEST:18.369 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:05:11.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-lpbq
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 11:05:11.610: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lpbq" in namespace "e2e-tests-subpath-dq7cm" to be "success or failure"
Feb  2 11:05:11.694: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 84.427474ms
Feb  2 11:05:13.727: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117387947s
Feb  2 11:05:15.742: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131759033s
Feb  2 11:05:17.755: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145069785s
Feb  2 11:05:19.771: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161709378s
Feb  2 11:05:22.120: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.509969903s
Feb  2 11:05:24.249: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=true. Elapsed: 12.638812211s
Feb  2 11:05:26.269: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 14.658886131s
Feb  2 11:05:28.286: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 16.676387556s
Feb  2 11:05:30.315: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 18.704906721s
Feb  2 11:05:32.358: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 20.748001688s
Feb  2 11:05:34.385: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 22.77514224s
Feb  2 11:05:36.408: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 24.798001763s
Feb  2 11:05:38.426: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 26.81652063s
Feb  2 11:05:40.449: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 28.839338594s
Feb  2 11:05:42.516: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Running", Reason="", readiness=false. Elapsed: 30.905766753s
Feb  2 11:05:44.565: INFO: Pod "pod-subpath-test-downwardapi-lpbq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.955000519s
STEP: Saw pod success
Feb  2 11:05:44.565: INFO: Pod "pod-subpath-test-downwardapi-lpbq" satisfied condition "success or failure"
Feb  2 11:05:44.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-lpbq container test-container-subpath-downwardapi-lpbq: 
STEP: delete the pod
Feb  2 11:05:44.827: INFO: Waiting for pod pod-subpath-test-downwardapi-lpbq to disappear
Feb  2 11:05:44.840: INFO: Pod pod-subpath-test-downwardapi-lpbq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lpbq
Feb  2 11:05:44.840: INFO: Deleting pod "pod-subpath-test-downwardapi-lpbq" in namespace "e2e-tests-subpath-dq7cm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:05:44.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dq7cm" for this suite.
Feb  2 11:05:52.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:05:53.018: INFO: namespace: e2e-tests-subpath-dq7cm, resource: bindings, ignored listing per whitelist
Feb  2 11:05:53.077: INFO: namespace e2e-tests-subpath-dq7cm deletion completed in 8.222139513s

• [SLOW TEST:41.721 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:05:53.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  2 11:05:53.201: INFO: PodSpec: initContainers in spec.initContainers
Feb  2 11:07:04.145: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fa76781d-45ab-11ea-8b99-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-p9ktr", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-p9ktr/pods/pod-init-fa76781d-45ab-11ea-8b99-0242ac110005", UID:"fa80530b-45ab-11ea-a994-fa163e34d433", ResourceVersion:"20295043", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716238353, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"201367399"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5f5wf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fb6bc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5f5wf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5f5wf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5f5wf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020824a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021180c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002082520)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002082540)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002082548), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00208254c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716238353, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716238353, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716238353, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716238353, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001e321a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00017e8c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00017e9a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://23135f1e43fcd36a19acebbd5a5aedb8fbc379d1d8fd2d43b06680f3fc955f40"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e321e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e321c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:07:04.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p9ktr" for this suite.
Feb  2 11:07:28.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:07:28.607: INFO: namespace: e2e-tests-init-container-p9ktr, resource: bindings, ignored listing per whitelist
Feb  2 11:07:28.618: INFO: namespace e2e-tests-init-container-p9ktr deletion completed in 24.298188736s

• [SLOW TEST:95.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
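For reference, the pod spec dumped in the failure message above corresponds to a manifest roughly like the following (a sketch reconstructed from the logged PodSpec; the suite generates the pod name and `time` label dynamically):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # the suite uses a generated name (pod-init-<uid>)
  labels:
    name: foo
spec:
  restartPolicy: Always         # RestartAlways: failed init containers are retried indefinitely
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: "52428800"
      limits:
        cpu: 100m
        memory: "52428800"
```

The status dump reflects exactly this: init1 terminated with a climbing RestartCount, init2 and run1 still Waiting, and the pod held in Pending with `Reason:"ContainersNotInitialized"`.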
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:07:28.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-337dd4e7-45ac-11ea-8b99-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-337dd557-45ac-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-337dd4e7-45ac-11ea-8b99-0242ac110005
STEP: Updating configmap cm-test-opt-upd-337dd557-45ac-11ea-8b99-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-337dd577-45ac-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:08:55.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7flp9" for this suite.
Feb  2 11:09:21.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:09:21.653: INFO: namespace: e2e-tests-projected-7flp9, resource: bindings, ignored listing per whitelist
Feb  2 11:09:21.699: INFO: namespace e2e-tests-projected-7flp9 deletion completed in 26.223593928s

• [SLOW TEST:113.081 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
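The optional-configMap behaviour exercised above can be sketched as a projected volume whose sources are all marked `optional` (an illustrative reconstruction using the configMap names from the log; the container, command, and mount path are assumptions, not the suite's exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]    # assumed; the test just needs a long-running pod
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del-337dd4e7-45ac-11ea-8b99-0242ac110005
          optional: true   # deleted mid-test; optional keeps the volume valid
      - configMap:
          name: cm-test-opt-upd-337dd557-45ac-11ea-8b99-0242ac110005
          optional: true   # updated mid-test; kubelet syncs the new contents
      - configMap:
          name: cm-test-opt-create-337dd577-45ac-11ea-8b99-0242ac110005
          optional: true   # created only after the pod starts
```

The "waiting to observe update in volume" step then watches the mounted files until the delete, update, and late create are all reflected.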
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:09:21.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:09:22.276: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"76ea8b74-45ac-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001b3c4ca), BlockOwnerDeletion:(*bool)(0xc001b3c4cb)}}
Feb  2 11:09:22.322: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"76d81c22-45ac-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0019a877a), BlockOwnerDeletion:(*bool)(0xc0019a877b)}}
Feb  2 11:09:22.434: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"76dabc3f-45ac-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001b3ce62), BlockOwnerDeletion:(*bool)(0xc001b3ce63)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:09:27.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ldg74" for this suite.
Feb  2 11:09:33.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:09:33.717: INFO: namespace: e2e-tests-gc-ldg74, resource: bindings, ignored listing per whitelist
Feb  2 11:09:33.836: INFO: namespace e2e-tests-gc-ldg74 deletion completed in 6.354298117s

• [SLOW TEST:12.137 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
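The three ownerReferences logged above form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. In manifest form, pod1's metadata looks roughly like this (reconstructed from the logged OwnerReference; the other two pods are analogous):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:        # pod1 -> pod3, pod2 -> pod1, pod3 -> pod2: a dependency circle
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 76ea8b74-45ac-11ea-a994-fa163e34d433
    controller: true
    blockOwnerDeletion: true
```

Even with `blockOwnerDeletion` set on every edge, the garbage collector is not deadlocked by the circle; the namespace (and all three pods) is deleted within the test's timeout.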
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:09:33.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-7e24b4a2-45ac-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:09:34.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-zr7vn" to be "success or failure"
Feb  2 11:09:34.267: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638221ms
Feb  2 11:09:36.277: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016618457s
Feb  2 11:09:38.828: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567641154s
Feb  2 11:09:40.836: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576181571s
Feb  2 11:09:42.849: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.588646869s
STEP: Saw pod success
Feb  2 11:09:42.849: INFO: Pod "pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:09:42.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 11:09:43.242: INFO: Waiting for pod pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005 to disappear
Feb  2 11:09:43.668: INFO: Pod pod-projected-secrets-7e337c27-45ac-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:09:43.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zr7vn" for this suite.
Feb  2 11:09:49.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:09:49.785: INFO: namespace: e2e-tests-projected-zr7vn, resource: bindings, ignored listing per whitelist
Feb  2 11:09:49.992: INFO: namespace e2e-tests-projected-zr7vn deletion completed in 6.312285527s

• [SLOW TEST:16.155 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
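The non-root/defaultMode/fsGroup combination tested above corresponds to a pod roughly like the following (a sketch; the secret name is taken from the log, but the uid/gid, mode, command, and mount path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000     # non-root (illustrative uid)
    fsGroup: 1000       # group ownership applied to the volume (illustrative gid)
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]   # assumed verification step
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440   # illustrative file mode
      sources:
      - secret:
          name: projected-secret-test-7e24b4a2-45ac-11ea-8b99-0242ac110005
```

The pod runs to completion ("success or failure" condition met as Succeeded) once the container has read the projected files with the expected ownership and mode.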
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:09:49.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-87b2c668-45ac-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 11:09:50.176: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-zb2gt" to be "success or failure"
Feb  2 11:09:50.182: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.417421ms
Feb  2 11:09:52.206: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029594371s
Feb  2 11:09:54.219: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042099066s
Feb  2 11:09:56.241: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064642643s
Feb  2 11:09:58.276: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09972426s
Feb  2 11:10:00.330: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154053528s
STEP: Saw pod success
Feb  2 11:10:00.331: INFO: Pod "pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:10:00.348: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 11:10:00.500: INFO: Waiting for pod pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005 to disappear
Feb  2 11:10:00.517: INFO: Pod pod-projected-configmaps-87b3ee94-45ac-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:10:00.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zb2gt" for this suite.
Feb  2 11:10:06.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:10:06.754: INFO: namespace: e2e-tests-projected-zb2gt, resource: bindings, ignored listing per whitelist
Feb  2 11:10:06.889: INFO: namespace e2e-tests-projected-zb2gt deletion completed in 6.286010568s

• [SLOW TEST:16.897 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:10:06.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-91d1ea39-45ac-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-91d1ea39-45ac-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:11:21.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4fcg2" for this suite.
Feb  2 11:11:45.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:11:45.471: INFO: namespace: e2e-tests-projected-4fcg2, resource: bindings, ignored listing per whitelist
Feb  2 11:11:45.571: INFO: namespace e2e-tests-projected-4fcg2 deletion completed in 24.266087308s

• [SLOW TEST:98.682 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:11:45.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-cc99596d-45ac-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:11:45.845: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-54t4v" to be "success or failure"
Feb  2 11:11:45.874: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.773524ms
Feb  2 11:11:47.921: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075594959s
Feb  2 11:11:49.944: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097878164s
Feb  2 11:11:52.209: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363639019s
Feb  2 11:11:54.223: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377645741s
Feb  2 11:11:56.248: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.402562266s
STEP: Saw pod success
Feb  2 11:11:56.248: INFO: Pod "pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:11:56.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 11:11:56.892: INFO: Waiting for pod pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005 to disappear
Feb  2 11:11:56.974: INFO: Pod pod-projected-secrets-cc9b3fc8-45ac-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:11:56.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-54t4v" for this suite.
Feb  2 11:12:03.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:12:03.203: INFO: namespace: e2e-tests-projected-54t4v, resource: bindings, ignored listing per whitelist
Feb  2 11:12:03.257: INFO: namespace e2e-tests-projected-54t4v deletion completed in 6.274751524s

• [SLOW TEST:17.685 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:12:03.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:12:03.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-2c2bg" to be "success or failure"
Feb  2 11:12:03.497: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.219262ms
Feb  2 11:12:05.511: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104269196s
Feb  2 11:12:07.547: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140465131s
Feb  2 11:12:09.641: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234391948s
Feb  2 11:12:11.703: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296457528s
Feb  2 11:12:13.729: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.322399547s
STEP: Saw pod success
Feb  2 11:12:13.729: INFO: Pod "downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:12:13.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:12:13.999: INFO: Waiting for pod downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005 to disappear
Feb  2 11:12:14.019: INFO: Pod downwardapi-volume-d71e0038-45ac-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:12:14.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2c2bg" for this suite.
Feb  2 11:12:20.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:12:20.291: INFO: namespace: e2e-tests-projected-2c2bg, resource: bindings, ignored listing per whitelist
Feb  2 11:12:20.454: INFO: namespace e2e-tests-projected-2c2bg deletion completed in 6.417385847s

• [SLOW TEST:17.197 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:12:20.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  2 11:12:29.564: INFO: Successfully updated pod "labelsupdatee17a5bdf-45ac-11ea-8b99-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:12:31.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvchc" for this suite.
Feb  2 11:12:55.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:12:55.858: INFO: namespace: e2e-tests-projected-kvchc, resource: bindings, ignored listing per whitelist
Feb  2 11:12:55.992: INFO: namespace e2e-tests-projected-kvchc deletion completed in 24.22097494s

• [SLOW TEST:35.537 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:12:55.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  2 11:12:56.194: INFO: Waiting up to 5m0s for pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005" in namespace "e2e-tests-var-expansion-j2dqx" to be "success or failure"
Feb  2 11:12:56.208: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.852268ms
Feb  2 11:12:58.221: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026781507s
Feb  2 11:13:00.241: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04757375s
Feb  2 11:13:02.301: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107398124s
Feb  2 11:13:04.421: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.227315533s
Feb  2 11:13:06.698: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.503830529s
STEP: Saw pod success
Feb  2 11:13:06.698: INFO: Pod "var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:13:06.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 11:13:07.389: INFO: Waiting for pod var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005 to disappear
Feb  2 11:13:07.409: INFO: Pod var-expansion-f6941eee-45ac-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:13:07.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-j2dqx" for this suite.
Feb  2 11:13:13.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:13:13.722: INFO: namespace: e2e-tests-var-expansion-j2dqx, resource: bindings, ignored listing per whitelist
Feb  2 11:13:13.728: INFO: namespace e2e-tests-var-expansion-j2dqx deletion completed in 6.296570293s

• [SLOW TEST:17.735 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:13:13.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 11:13:14.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sctq8'
Feb  2 11:13:15.981: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 11:13:15.981: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb  2 11:13:16.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-sctq8'
Feb  2 11:13:16.252: INFO: stderr: ""
Feb  2 11:13:16.252: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:13:16.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sctq8" for this suite.
Feb  2 11:13:24.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:13:24.652: INFO: namespace: e2e-tests-kubectl-sctq8, resource: bindings, ignored listing per whitelist
Feb  2 11:13:24.660: INFO: namespace e2e-tests-kubectl-sctq8 deletion completed in 8.271589364s

• [SLOW TEST:10.932 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:13:24.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-snvhz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 11:13:24.868: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 11:14:01.228: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-snvhz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 11:14:01.228: INFO: >>> kubeConfig: /root/.kube/config
I0202 11:14:01.308990       9 log.go:172] (0xc002256000) (0xc001269860) Create stream
I0202 11:14:01.309051       9 log.go:172] (0xc002256000) (0xc001269860) Stream added, broadcasting: 1
I0202 11:14:01.328622       9 log.go:172] (0xc002256000) Reply frame received for 1
I0202 11:14:01.328719       9 log.go:172] (0xc002256000) (0xc001fbc000) Create stream
I0202 11:14:01.328739       9 log.go:172] (0xc002256000) (0xc001fbc000) Stream added, broadcasting: 3
I0202 11:14:01.331132       9 log.go:172] (0xc002256000) Reply frame received for 3
I0202 11:14:01.331193       9 log.go:172] (0xc002256000) (0xc001c8a000) Create stream
I0202 11:14:01.331212       9 log.go:172] (0xc002256000) (0xc001c8a000) Stream added, broadcasting: 5
I0202 11:14:01.334790       9 log.go:172] (0xc002256000) Reply frame received for 5
I0202 11:14:02.566248       9 log.go:172] (0xc002256000) Data frame received for 3
I0202 11:14:02.566431       9 log.go:172] (0xc001fbc000) (3) Data frame handling
I0202 11:14:02.566461       9 log.go:172] (0xc001fbc000) (3) Data frame sent
I0202 11:14:02.760128       9 log.go:172] (0xc002256000) Data frame received for 1
I0202 11:14:02.760537       9 log.go:172] (0xc002256000) (0xc001fbc000) Stream removed, broadcasting: 3
I0202 11:14:02.760677       9 log.go:172] (0xc001269860) (1) Data frame handling
I0202 11:14:02.760709       9 log.go:172] (0xc001269860) (1) Data frame sent
I0202 11:14:02.760772       9 log.go:172] (0xc002256000) (0xc001c8a000) Stream removed, broadcasting: 5
I0202 11:14:02.760840       9 log.go:172] (0xc002256000) (0xc001269860) Stream removed, broadcasting: 1
I0202 11:14:02.760869       9 log.go:172] (0xc002256000) Go away received
I0202 11:14:02.761210       9 log.go:172] (0xc002256000) (0xc001269860) Stream removed, broadcasting: 1
I0202 11:14:02.761237       9 log.go:172] (0xc002256000) (0xc001fbc000) Stream removed, broadcasting: 3
I0202 11:14:02.761251       9 log.go:172] (0xc002256000) (0xc001c8a000) Stream removed, broadcasting: 5
Feb  2 11:14:02.761: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:14:02.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-snvhz" for this suite.
Feb  2 11:14:26.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:14:27.022: INFO: namespace: e2e-tests-pod-network-test-snvhz, resource: bindings, ignored listing per whitelist
Feb  2 11:14:27.092: INFO: namespace e2e-tests-pod-network-test-snvhz deletion completed in 24.309291244s

• [SLOW TEST:62.431 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:14:27.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-5j5r8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-5j5r8 to expose endpoints map[]
Feb  2 11:14:27.320: INFO: Get endpoints failed (12.541072ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  2 11:14:28.337: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-5j5r8 exposes endpoints map[] (1.029676614s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5j5r8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-5j5r8 to expose endpoints map[pod1:[100]]
Feb  2 11:14:33.291: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.933955832s elapsed, will retry)
Feb  2 11:14:37.457: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-5j5r8 exposes endpoints map[pod1:[100]] (9.099974042s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5j5r8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-5j5r8 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  2 11:14:41.824: INFO: Unexpected endpoints: found map[2d836718-45ad-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.35880446s elapsed, will retry)
Feb  2 11:14:46.062: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-5j5r8 exposes endpoints map[pod1:[100] pod2:[101]] (8.597259341s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5j5r8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-5j5r8 to expose endpoints map[pod2:[101]]
Feb  2 11:14:46.288: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-5j5r8 exposes endpoints map[pod2:[101]] (213.272049ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5j5r8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-5j5r8 to expose endpoints map[]
Feb  2 11:14:47.428: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-5j5r8 exposes endpoints map[] (1.125451236s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:14:47.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5j5r8" for this suite.
Feb  2 11:15:11.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:15:11.971: INFO: namespace: e2e-tests-services-5j5r8, resource: bindings, ignored listing per whitelist
Feb  2 11:15:11.979: INFO: namespace e2e-tests-services-5j5r8 deletion completed in 24.383235954s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:44.886 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:15:11.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb  2 11:15:12.384: INFO: Waiting up to 5m0s for pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-containers-zr9ft" to be "success or failure"
Feb  2 11:15:12.426: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.121029ms
Feb  2 11:15:14.472: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087429685s
Feb  2 11:15:16.549: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164932143s
Feb  2 11:15:18.591: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207000031s
Feb  2 11:15:20.639: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.254589303s
STEP: Saw pod success
Feb  2 11:15:20.639: INFO: Pod "client-containers-47a7a345-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:15:20.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-47a7a345-45ad-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:15:20.828: INFO: Waiting for pod client-containers-47a7a345-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:15:20.839: INFO: Pod client-containers-47a7a345-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:15:20.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zr9ft" for this suite.
Feb  2 11:15:27.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:15:27.122: INFO: namespace: e2e-tests-containers-zr9ft, resource: bindings, ignored listing per whitelist
Feb  2 11:15:27.124: INFO: namespace e2e-tests-containers-zr9ft deletion completed in 6.2785052s

• [SLOW TEST:15.145 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:15:27.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:15:27.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-qzlcq" to be "success or failure"
Feb  2 11:15:27.336: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.748444ms
Feb  2 11:15:29.348: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015964777s
Feb  2 11:15:31.369: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03660706s
Feb  2 11:15:33.427: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094965002s
Feb  2 11:15:35.647: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314488155s
Feb  2 11:15:37.688: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.355090213s
STEP: Saw pod success
Feb  2 11:15:37.688: INFO: Pod "downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:15:37.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:15:37.883: INFO: Waiting for pod downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:15:37.897: INFO: Pod downwardapi-volume-50a9df01-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:15:37.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qzlcq" for this suite.
Feb  2 11:15:43.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:15:44.063: INFO: namespace: e2e-tests-projected-qzlcq, resource: bindings, ignored listing per whitelist
Feb  2 11:15:44.109: INFO: namespace e2e-tests-projected-qzlcq deletion completed in 6.199915661s

• [SLOW TEST:16.984 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:15:44.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:16:44.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-2k5g2" for this suite.
Feb  2 11:17:08.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:17:08.672: INFO: namespace: e2e-tests-container-probe-2k5g2, resource: bindings, ignored listing per whitelist
Feb  2 11:17:08.721: INFO: namespace e2e-tests-container-probe-2k5g2 deletion completed in 24.296284636s

• [SLOW TEST:84.612 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:17:08.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:17:08.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-bmlfq" to be "success or failure"
Feb  2 11:17:08.891: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.025552ms
Feb  2 11:17:11.034: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163721122s
Feb  2 11:17:13.060: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189267747s
Feb  2 11:17:15.252: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38126815s
Feb  2 11:17:17.409: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538577583s
Feb  2 11:17:19.425: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.554591879s
STEP: Saw pod success
Feb  2 11:17:19.425: INFO: Pod "downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:17:19.435: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:17:19.509: INFO: Waiting for pod downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:17:19.515: INFO: Pod downwardapi-volume-8d286221-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:17:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bmlfq" for this suite.
Feb  2 11:17:25.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:17:25.836: INFO: namespace: e2e-tests-projected-bmlfq, resource: bindings, ignored listing per whitelist
Feb  2 11:17:25.868: INFO: namespace e2e-tests-projected-bmlfq deletion completed in 6.347321553s

• [SLOW TEST:17.147 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:17:25.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-9grz
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 11:17:26.404: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9grz" in namespace "e2e-tests-subpath-zhl6n" to be "success or failure"
Feb  2 11:17:26.488: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 84.436088ms
Feb  2 11:17:28.519: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114982435s
Feb  2 11:17:30.552: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148757731s
Feb  2 11:17:32.936: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532303384s
Feb  2 11:17:34.951: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547510283s
Feb  2 11:17:36.964: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.560061277s
Feb  2 11:17:38.980: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.576201985s
Feb  2 11:17:40.995: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.591341533s
Feb  2 11:17:43.010: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 16.606284457s
Feb  2 11:17:45.023: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 18.619549128s
Feb  2 11:17:47.062: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 20.657800672s
Feb  2 11:17:49.077: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 22.673726217s
Feb  2 11:17:51.099: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 24.695507006s
Feb  2 11:17:53.115: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 26.710819806s
Feb  2 11:17:55.130: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 28.726021185s
Feb  2 11:17:57.144: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 30.740425807s
Feb  2 11:17:59.172: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Running", Reason="", readiness=false. Elapsed: 32.768038469s
Feb  2 11:18:01.189: INFO: Pod "pod-subpath-test-configmap-9grz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.784978506s
STEP: Saw pod success
Feb  2 11:18:01.189: INFO: Pod "pod-subpath-test-configmap-9grz" satisfied condition "success or failure"
Feb  2 11:18:01.195: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-9grz container test-container-subpath-configmap-9grz: 
STEP: delete the pod
Feb  2 11:18:01.317: INFO: Waiting for pod pod-subpath-test-configmap-9grz to disappear
Feb  2 11:18:01.410: INFO: Pod pod-subpath-test-configmap-9grz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9grz
Feb  2 11:18:01.410: INFO: Deleting pod "pod-subpath-test-configmap-9grz" in namespace "e2e-tests-subpath-zhl6n"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:18:01.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zhl6n" for this suite.
Feb  2 11:18:09.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:18:09.495: INFO: namespace: e2e-tests-subpath-zhl6n, resource: bindings, ignored listing per whitelist
Feb  2 11:18:09.597: INFO: namespace e2e-tests-subpath-zhl6n deletion completed in 8.174130563s

• [SLOW TEST:43.729 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
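The long runs of `Phase="Pending" … Elapsed:` lines above come from the framework polling the pod's phase roughly every 2 seconds until it reaches `Succeeded` or the 5-minute deadline expires. A minimal sketch of that loop in shell, with a hypothetical `get_phase` stub standing in for the real API call (the actual framework code is Go inside test/e2e, not this script):

```shell
#!/bin/sh
# Hypothetical sketch of the ~2 s poll loop behind the
# 'Phase="Pending" ... Elapsed: ...' log lines above.
get_phase() { echo "Succeeded"; }   # stand-in for a kubectl/API phase query

elapsed=0; timeout=300; interval=2  # 5m0s deadline, ~2 s between checks
while :; do
  phase="$(get_phase)"
  [ "$phase" = "Succeeded" ] && break
  [ "$elapsed" -gt "$timeout" ] && { echo "timed out, still $phase" >&2; exit 1; }
  sleep "$interval"
  elapsed=$((elapsed + interval))
done
echo "pod reached $phase after ${elapsed}s"
```

With the stub returning `Succeeded` immediately, the loop exits on the first check; against a real cluster the `get_phase` call would be replaced by an API lookup of the pod's `status.phase`.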
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:18:09.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:18:09.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-npth6" to be "success or failure"
Feb  2 11:18:09.936: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.595992ms
Feb  2 11:18:11.944: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05581422s
Feb  2 11:18:13.957: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069236353s
Feb  2 11:18:15.971: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083028338s
Feb  2 11:18:17.986: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098362104s
Feb  2 11:18:19.999: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111311758s
STEP: Saw pod success
Feb  2 11:18:19.999: INFO: Pod "downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:18:20.006: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:18:20.098: INFO: Waiting for pod downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:18:20.113: INFO: Pod downwardapi-volume-b1899472-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:18:20.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-npth6" for this suite.
Feb  2 11:18:28.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:18:28.454: INFO: namespace: e2e-tests-downward-api-npth6, resource: bindings, ignored listing per whitelist
Feb  2 11:18:28.649: INFO: namespace e2e-tests-downward-api-npth6 deletion completed in 8.521750482s

• [SLOW TEST:19.051 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:18:28.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bcd33418-45ad-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 11:18:28.861: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-v5hg6" to be "success or failure"
Feb  2 11:18:28.891: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.847002ms
Feb  2 11:18:30.943: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081695901s
Feb  2 11:18:32.954: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093321379s
Feb  2 11:18:35.032: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170588948s
Feb  2 11:18:37.044: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18258522s
Feb  2 11:18:39.316: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454951786s
STEP: Saw pod success
Feb  2 11:18:39.316: INFO: Pod "pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:18:39.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 11:18:39.831: INFO: Waiting for pod pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:18:39.867: INFO: Pod pod-projected-configmaps-bcd587b0-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:18:39.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v5hg6" for this suite.
Feb  2 11:18:45.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:18:45.996: INFO: namespace: e2e-tests-projected-v5hg6, resource: bindings, ignored listing per whitelist
Feb  2 11:18:46.063: INFO: namespace e2e-tests-projected-v5hg6 deletion completed in 6.185883301s

• [SLOW TEST:17.414 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:18:46.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c7413c4e-45ad-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 11:18:46.316: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-wm2kp" to be "success or failure"
Feb  2 11:18:46.334: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.889235ms
Feb  2 11:18:48.376: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059945003s
Feb  2 11:18:50.408: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091820664s
Feb  2 11:18:52.436: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120028629s
Feb  2 11:18:54.491: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17452226s
Feb  2 11:18:56.513: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197224279s
STEP: Saw pod success
Feb  2 11:18:56.514: INFO: Pod "pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:18:56.532: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 11:18:56.690: INFO: Waiting for pod pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:18:56.703: INFO: Pod pod-projected-configmaps-c742c6cd-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:18:56.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wm2kp" for this suite.
Feb  2 11:19:02.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:19:02.974: INFO: namespace: e2e-tests-projected-wm2kp, resource: bindings, ignored listing per whitelist
Feb  2 11:19:03.104: INFO: namespace e2e-tests-projected-wm2kp deletion completed in 6.3494459s

• [SLOW TEST:17.041 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:19:03.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:19:03.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-hltww" to be "success or failure"
Feb  2 11:19:03.383: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 155.184152ms
Feb  2 11:19:05.403: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175515409s
Feb  2 11:19:07.417: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189173892s
Feb  2 11:19:09.644: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41619167s
Feb  2 11:19:11.651: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.423961536s
Feb  2 11:19:13.898: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.670458688s
STEP: Saw pod success
Feb  2 11:19:13.898: INFO: Pod "downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:19:13.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:19:14.435: INFO: Waiting for pod downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:19:14.518: INFO: Pod downwardapi-volume-d157cd0b-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:19:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hltww" for this suite.
Feb  2 11:19:20.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:19:20.683: INFO: namespace: e2e-tests-downward-api-hltww, resource: bindings, ignored listing per whitelist
Feb  2 11:19:20.881: INFO: namespace e2e-tests-downward-api-hltww deletion completed in 6.346812075s

• [SLOW TEST:17.777 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:19:20.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-84kgm;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-84kgm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.202.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.202.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.202.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.202.203_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-84kgm;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-84kgm;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-84kgm.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-84kgm.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-84kgm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.202.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.202.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.202.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.202.203_tcp@PTR;sleep 1; done
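The probe scripts above are emitted by the test as one long line; the pattern for each record is the same, so a single representative iteration can be sketched readably. Here `lookup` is a hypothetical stub for the real `dig +notcp +noall +answer +search <name> A` query, so the sketch runs without a DNS server:

```shell
#!/bin/sh
# One readable iteration of the probe loop above: if the lookup
# returns a non-empty answer, write OK to the results file.
lookup() { echo "10.102.202.203"; }   # hypothetical stub for dig

check="$(lookup dns-test-service)" \
  && test -n "$check" \
  && echo OK   # prints OK; the real script redirects this to /results/<name>
```

The real script repeats this check for every name/protocol pair (UDP and TCP, service, namespaced, SRV, pod A record, PTR) once per second for up to 600 iterations, which is what the prober pods below report on.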

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  2 11:19:37.575: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.586: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.604: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-84kgm from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.615: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.623: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.631: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.637: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.647: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.651: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.656: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.662: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.668: INFO: Unable to read 10.102.202.203_udp@PTR from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.675: INFO: Unable to read 10.102.202.203_tcp@PTR from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.681: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.688: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.695: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-84kgm from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.703: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-84kgm from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.712: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.718: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.732: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.739: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.744: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.751: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.756: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.761: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.766: INFO: Unable to read 10.102.202.203_udp@PTR from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.770: INFO: Unable to read 10.102.202.203_tcp@PTR from pod e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005)
Feb  2 11:19:37.770: INFO: Lookups using e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-84kgm wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm wheezy_udp@dns-test-service.e2e-tests-dns-84kgm.svc wheezy_tcp@dns-test-service.e2e-tests-dns-84kgm.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.202.203_udp@PTR 10.102.202.203_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-84kgm jessie_tcp@dns-test-service.e2e-tests-dns-84kgm jessie_udp@dns-test-service.e2e-tests-dns-84kgm.svc jessie_tcp@dns-test-service.e2e-tests-dns-84kgm.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-84kgm.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-84kgm.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.202.203_udp@PTR 10.102.202.203_tcp@PTR]

Feb  2 11:19:43.382: INFO: DNS probes using e2e-tests-dns-84kgm/dns-test-dc0c6321-45ad-11ea-8b99-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:19:43.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-84kgm" for this suite.
Feb  2 11:19:49.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:19:50.039: INFO: namespace: e2e-tests-dns-84kgm, resource: bindings, ignored listing per whitelist
Feb  2 11:19:50.117: INFO: namespace e2e-tests-dns-84kgm deletion completed in 6.193504551s

• [SLOW TEST:29.237 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:19:50.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  2 11:19:50.341: INFO: Waiting up to 5m0s for pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-67lxz" to be "success or failure"
Feb  2 11:19:50.349: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132438ms
Feb  2 11:19:52.358: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017214427s
Feb  2 11:19:54.767: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425995902s
Feb  2 11:19:56.788: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446810278s
Feb  2 11:19:59.201: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.860718893s
STEP: Saw pod success
Feb  2 11:19:59.202: INFO: Pod "pod-ed6dff93-45ad-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:19:59.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ed6dff93-45ad-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:19:59.903: INFO: Waiting for pod pod-ed6dff93-45ad-11ea-8b99-0242ac110005 to disappear
Feb  2 11:20:00.103: INFO: Pod pod-ed6dff93-45ad-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:20:00.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-67lxz" for this suite.
Feb  2 11:20:06.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:20:06.222: INFO: namespace: e2e-tests-emptydir-67lxz, resource: bindings, ignored listing per whitelist
Feb  2 11:20:06.340: INFO: namespace e2e-tests-emptydir-67lxz deletion completed in 6.220501452s

• [SLOW TEST:16.222 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:20:06.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:20:06.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  2 11:20:06.761: INFO: stderr: ""
Feb  2 11:20:06.761: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:20:06.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xmcsq" for this suite.
Feb  2 11:20:12.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:20:12.970: INFO: namespace: e2e-tests-kubectl-xmcsq, resource: bindings, ignored listing per whitelist
Feb  2 11:20:13.011: INFO: namespace e2e-tests-kubectl-xmcsq deletion completed in 6.238308384s

• [SLOW TEST:6.671 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:20:13.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:20:21.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-xxzwp" for this suite.
Feb  2 11:20:27.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:20:27.677: INFO: namespace: e2e-tests-emptydir-wrapper-xxzwp, resource: bindings, ignored listing per whitelist
Feb  2 11:20:27.778: INFO: namespace e2e-tests-emptydir-wrapper-xxzwp deletion completed in 6.200356043s

• [SLOW TEST:14.766 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:20:27.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-03e77b41-45ae-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:20:28.053: INFO: Waiting up to 5m0s for pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-rv84x" to be "success or failure"
Feb  2 11:20:28.065: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.038669ms
Feb  2 11:20:30.135: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082257784s
Feb  2 11:20:32.149: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095971174s
Feb  2 11:20:34.292: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239370085s
Feb  2 11:20:36.351: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.29786024s
STEP: Saw pod success
Feb  2 11:20:36.351: INFO: Pod "pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:20:36.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 11:20:36.451: INFO: Waiting for pod pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005 to disappear
Feb  2 11:20:36.575: INFO: Pod pod-secrets-03e8e3c9-45ae-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:20:36.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rv84x" for this suite.
Feb  2 11:20:44.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:20:44.981: INFO: namespace: e2e-tests-secrets-rv84x, resource: bindings, ignored listing per whitelist
Feb  2 11:20:45.112: INFO: namespace e2e-tests-secrets-rv84x deletion completed in 8.506657696s

• [SLOW TEST:17.333 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:20:45.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  2 11:20:54.080: INFO: Successfully updated pod "annotationupdate0e3b7dd9-45ae-11ea-8b99-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:20:56.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l52f5" for this suite.
Feb  2 11:21:20.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:21:20.278: INFO: namespace: e2e-tests-downward-api-l52f5, resource: bindings, ignored listing per whitelist
Feb  2 11:21:20.324: INFO: namespace e2e-tests-downward-api-l52f5 deletion completed in 24.168765207s

• [SLOW TEST:35.212 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:21:20.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb  2 11:21:20.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  2 11:21:20.856: INFO: stderr: ""
Feb  2 11:21:20.856: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:21:20.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7bcff" for this suite.
Feb  2 11:21:26.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:21:27.002: INFO: namespace: e2e-tests-kubectl-7bcff, resource: bindings, ignored listing per whitelist
Feb  2 11:21:27.169: INFO: namespace e2e-tests-kubectl-7bcff deletion completed in 6.260568945s

• [SLOW TEST:6.845 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:21:27.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0202 11:21:58.811272       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 11:21:58.811: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:21:58.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mnnw8" for this suite.
Feb  2 11:22:08.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:22:09.277: INFO: namespace: e2e-tests-gc-mnnw8, resource: bindings, ignored listing per whitelist
Feb  2 11:22:09.402: INFO: namespace e2e-tests-gc-mnnw8 deletion completed in 10.586521493s

• [SLOW TEST:42.233 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:22:09.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  2 11:22:10.266: INFO: Waiting up to 5m0s for pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-jrm65" to be "success or failure"
Feb  2 11:22:10.344: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.984192ms
Feb  2 11:22:12.357: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090715131s
Feb  2 11:22:14.887: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.620798022s
Feb  2 11:22:16.911: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644899955s
Feb  2 11:22:18.927: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.660548523s
STEP: Saw pod success
Feb  2 11:22:18.927: INFO: Pod "pod-40cc3782-45ae-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:22:18.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-40cc3782-45ae-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:22:19.279: INFO: Waiting for pod pod-40cc3782-45ae-11ea-8b99-0242ac110005 to disappear
Feb  2 11:22:19.293: INFO: Pod pod-40cc3782-45ae-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:22:19.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jrm65" for this suite.
Feb  2 11:22:25.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:22:25.554: INFO: namespace: e2e-tests-emptydir-jrm65, resource: bindings, ignored listing per whitelist
Feb  2 11:22:25.577: INFO: namespace e2e-tests-emptydir-jrm65 deletion completed in 6.260049878s

• [SLOW TEST:16.174 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:22:25.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:22:25.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:22:35.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ckmbx" for this suite.
Feb  2 11:23:25.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:23:26.165: INFO: namespace: e2e-tests-pods-ckmbx, resource: bindings, ignored listing per whitelist
Feb  2 11:23:26.173: INFO: namespace e2e-tests-pods-ckmbx deletion completed in 50.217612831s

• [SLOW TEST:60.596 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:23:26.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x7xkb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-x7xkb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  2 11:23:40.713: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.717: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.721: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.726: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.732: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.737: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.742: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.750: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.753: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.757: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005: the server could not find the requested resource (get pods dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005)
Feb  2 11:23:40.757: INFO: Lookups using e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-x7xkb.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  2 11:23:45.954: INFO: DNS probes using e2e-tests-dns-x7xkb/dns-test-6e3f0b7a-45ae-11ea-8b99-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:23:46.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-x7xkb" for this suite.
Feb  2 11:23:52.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:23:52.360: INFO: namespace: e2e-tests-dns-x7xkb, resource: bindings, ignored listing per whitelist
Feb  2 11:23:52.422: INFO: namespace e2e-tests-dns-x7xkb deletion completed in 6.370180267s

• [SLOW TEST:26.248 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:23:52.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-5kxdx
I0202 11:23:52.891085       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-5kxdx, replica count: 1
I0202 11:23:53.941748       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:54.942219       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:55.942628       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:56.943024       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:57.943424       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:58.943733       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:23:59.944072       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:24:00.944395       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 11:24:01.944655       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  2 11:24:02.301: INFO: Created: latency-svc-c6vc6
Feb  2 11:24:02.333: INFO: Got endpoints: latency-svc-c6vc6 [288.064143ms]
Feb  2 11:24:02.593: INFO: Created: latency-svc-bcdnd
Feb  2 11:24:02.740: INFO: Got endpoints: latency-svc-bcdnd [405.883718ms]
Feb  2 11:24:02.767: INFO: Created: latency-svc-2zq4z
Feb  2 11:24:02.789: INFO: Got endpoints: latency-svc-2zq4z [454.974356ms]
Feb  2 11:24:03.030: INFO: Created: latency-svc-rhhzl
Feb  2 11:24:03.048: INFO: Got endpoints: latency-svc-rhhzl [715.171838ms]
Feb  2 11:24:03.118: INFO: Created: latency-svc-6wwmw
Feb  2 11:24:03.299: INFO: Got endpoints: latency-svc-6wwmw [965.770959ms]
Feb  2 11:24:03.320: INFO: Created: latency-svc-flv2w
Feb  2 11:24:03.350: INFO: Got endpoints: latency-svc-flv2w [1.015818941s]
Feb  2 11:24:03.521: INFO: Created: latency-svc-tg698
Feb  2 11:24:03.529: INFO: Got endpoints: latency-svc-tg698 [1.195680636s]
Feb  2 11:24:03.710: INFO: Created: latency-svc-5sgqp
Feb  2 11:24:03.753: INFO: Got endpoints: latency-svc-5sgqp [1.419850557s]
Feb  2 11:24:03.958: INFO: Created: latency-svc-8kgh5
Feb  2 11:24:03.982: INFO: Got endpoints: latency-svc-8kgh5 [1.648630454s]
Feb  2 11:24:04.187: INFO: Created: latency-svc-mczc7
Feb  2 11:24:04.207: INFO: Got endpoints: latency-svc-mczc7 [1.87324428s]
Feb  2 11:24:04.394: INFO: Created: latency-svc-rnb7p
Feb  2 11:24:04.403: INFO: Got endpoints: latency-svc-rnb7p [2.06921036s]
Feb  2 11:24:04.603: INFO: Created: latency-svc-56866
Feb  2 11:24:04.635: INFO: Got endpoints: latency-svc-56866 [2.301309814s]
Feb  2 11:24:04.781: INFO: Created: latency-svc-mfbql
Feb  2 11:24:04.803: INFO: Got endpoints: latency-svc-mfbql [2.469343741s]
Feb  2 11:24:04.887: INFO: Created: latency-svc-t59bl
Feb  2 11:24:05.058: INFO: Got endpoints: latency-svc-t59bl [2.724885802s]
Feb  2 11:24:05.106: INFO: Created: latency-svc-bg9ls
Feb  2 11:24:05.130: INFO: Got endpoints: latency-svc-bg9ls [2.796397094s]
Feb  2 11:24:05.268: INFO: Created: latency-svc-d758c
Feb  2 11:24:05.300: INFO: Got endpoints: latency-svc-d758c [2.966156914s]
Feb  2 11:24:05.469: INFO: Created: latency-svc-lt6l8
Feb  2 11:24:05.526: INFO: Got endpoints: latency-svc-lt6l8 [2.786417605s]
Feb  2 11:24:05.705: INFO: Created: latency-svc-f5dr8
Feb  2 11:24:05.709: INFO: Got endpoints: latency-svc-f5dr8 [2.92001094s]
Feb  2 11:24:05.800: INFO: Created: latency-svc-h8f4h
Feb  2 11:24:05.923: INFO: Got endpoints: latency-svc-h8f4h [2.875050399s]
Feb  2 11:24:05.966: INFO: Created: latency-svc-8bdbs
Feb  2 11:24:05.980: INFO: Got endpoints: latency-svc-8bdbs [2.681032112s]
Feb  2 11:24:06.141: INFO: Created: latency-svc-sqnbq
Feb  2 11:24:06.166: INFO: Got endpoints: latency-svc-sqnbq [2.816605005s]
Feb  2 11:24:06.388: INFO: Created: latency-svc-whjrq
Feb  2 11:24:06.427: INFO: Got endpoints: latency-svc-whjrq [2.897662769s]
Feb  2 11:24:06.581: INFO: Created: latency-svc-8pspx
Feb  2 11:24:06.631: INFO: Got endpoints: latency-svc-8pspx [2.878186484s]
Feb  2 11:24:06.820: INFO: Created: latency-svc-cqsjc
Feb  2 11:24:06.861: INFO: Got endpoints: latency-svc-cqsjc [2.879182621s]
Feb  2 11:24:07.024: INFO: Created: latency-svc-phqgx
Feb  2 11:24:07.050: INFO: Got endpoints: latency-svc-phqgx [2.843077621s]
Feb  2 11:24:07.104: INFO: Created: latency-svc-2658t
Feb  2 11:24:07.208: INFO: Got endpoints: latency-svc-2658t [2.805181823s]
Feb  2 11:24:07.252: INFO: Created: latency-svc-2ztpt
Feb  2 11:24:07.267: INFO: Got endpoints: latency-svc-2ztpt [2.632007629s]
Feb  2 11:24:07.404: INFO: Created: latency-svc-gzf42
Feb  2 11:24:07.492: INFO: Got endpoints: latency-svc-gzf42 [2.688538703s]
Feb  2 11:24:07.528: INFO: Created: latency-svc-znsl5
Feb  2 11:24:07.658: INFO: Got endpoints: latency-svc-znsl5 [2.599390503s]
Feb  2 11:24:07.674: INFO: Created: latency-svc-92fs8
Feb  2 11:24:07.701: INFO: Got endpoints: latency-svc-92fs8 [2.569995456s]
Feb  2 11:24:07.831: INFO: Created: latency-svc-q9m4k
Feb  2 11:24:07.857: INFO: Got endpoints: latency-svc-q9m4k [2.556552062s]
Feb  2 11:24:08.070: INFO: Created: latency-svc-hsjzz
Feb  2 11:24:08.086: INFO: Got endpoints: latency-svc-hsjzz [2.55923659s]
Feb  2 11:24:08.114: INFO: Created: latency-svc-g2lrp
Feb  2 11:24:08.126: INFO: Got endpoints: latency-svc-g2lrp [2.41641019s]
Feb  2 11:24:08.407: INFO: Created: latency-svc-rlqp2
Feb  2 11:24:08.429: INFO: Got endpoints: latency-svc-rlqp2 [343.252352ms]
Feb  2 11:24:08.807: INFO: Created: latency-svc-6kds4
Feb  2 11:24:08.809: INFO: Got endpoints: latency-svc-6kds4 [2.885335638s]
Feb  2 11:24:09.020: INFO: Created: latency-svc-cjrg2
Feb  2 11:24:09.175: INFO: Got endpoints: latency-svc-cjrg2 [3.194748312s]
Feb  2 11:24:09.193: INFO: Created: latency-svc-pgqnv
Feb  2 11:24:09.235: INFO: Got endpoints: latency-svc-pgqnv [3.068292516s]
Feb  2 11:24:09.395: INFO: Created: latency-svc-p5stl
Feb  2 11:24:09.405: INFO: Got endpoints: latency-svc-p5stl [2.977388323s]
Feb  2 11:24:09.455: INFO: Created: latency-svc-8ld26
Feb  2 11:24:09.478: INFO: Got endpoints: latency-svc-8ld26 [2.846665353s]
Feb  2 11:24:09.629: INFO: Created: latency-svc-b6dzv
Feb  2 11:24:09.646: INFO: Got endpoints: latency-svc-b6dzv [2.784631305s]
Feb  2 11:24:09.709: INFO: Created: latency-svc-dnrl9
Feb  2 11:24:09.718: INFO: Got endpoints: latency-svc-dnrl9 [2.667376639s]
Feb  2 11:24:09.888: INFO: Created: latency-svc-5r5ct
Feb  2 11:24:09.944: INFO: Got endpoints: latency-svc-5r5ct [2.735036405s]
Feb  2 11:24:09.950: INFO: Created: latency-svc-5p6ms
Feb  2 11:24:10.094: INFO: Got endpoints: latency-svc-5p6ms [2.826274595s]
Feb  2 11:24:10.113: INFO: Created: latency-svc-p99g4
Feb  2 11:24:10.138: INFO: Got endpoints: latency-svc-p99g4 [2.646460074s]
Feb  2 11:24:10.184: INFO: Created: latency-svc-w2rqr
Feb  2 11:24:10.291: INFO: Got endpoints: latency-svc-w2rqr [2.632702238s]
Feb  2 11:24:10.314: INFO: Created: latency-svc-qxcpd
Feb  2 11:24:10.330: INFO: Got endpoints: latency-svc-qxcpd [2.62934648s]
Feb  2 11:24:10.376: INFO: Created: latency-svc-jnn5t
Feb  2 11:24:10.504: INFO: Got endpoints: latency-svc-jnn5t [2.646707417s]
Feb  2 11:24:10.585: INFO: Created: latency-svc-7mfgl
Feb  2 11:24:10.712: INFO: Got endpoints: latency-svc-7mfgl [2.58614298s]
Feb  2 11:24:10.734: INFO: Created: latency-svc-xzlxh
Feb  2 11:24:10.763: INFO: Got endpoints: latency-svc-xzlxh [2.333005158s]
Feb  2 11:24:10.799: INFO: Created: latency-svc-v6xhc
Feb  2 11:24:10.981: INFO: Got endpoints: latency-svc-v6xhc [2.171694339s]
Feb  2 11:24:10.997: INFO: Created: latency-svc-sj8k4
Feb  2 11:24:11.021: INFO: Got endpoints: latency-svc-sj8k4 [1.845167911s]
Feb  2 11:24:11.186: INFO: Created: latency-svc-vjms7
Feb  2 11:24:11.210: INFO: Got endpoints: latency-svc-vjms7 [1.974826071s]
Feb  2 11:24:11.539: INFO: Created: latency-svc-qvmrd
Feb  2 11:24:11.573: INFO: Got endpoints: latency-svc-qvmrd [2.168011499s]
Feb  2 11:24:11.715: INFO: Created: latency-svc-hxm4c
Feb  2 11:24:11.733: INFO: Got endpoints: latency-svc-hxm4c [2.255079336s]
Feb  2 11:24:11.951: INFO: Created: latency-svc-8cfmv
Feb  2 11:24:11.995: INFO: Got endpoints: latency-svc-8cfmv [2.348747364s]
Feb  2 11:24:12.060: INFO: Created: latency-svc-8vr2v
Feb  2 11:24:12.215: INFO: Got endpoints: latency-svc-8vr2v [2.496739347s]
Feb  2 11:24:12.239: INFO: Created: latency-svc-96cnp
Feb  2 11:24:12.285: INFO: Got endpoints: latency-svc-96cnp [2.340897739s]
Feb  2 11:24:12.427: INFO: Created: latency-svc-xmptz
Feb  2 11:24:12.616: INFO: Created: latency-svc-lbssx
Feb  2 11:24:12.641: INFO: Got endpoints: latency-svc-xmptz [2.547339443s]
Feb  2 11:24:12.654: INFO: Got endpoints: latency-svc-lbssx [2.515591318s]
Feb  2 11:24:12.766: INFO: Created: latency-svc-mxcgh
Feb  2 11:24:12.800: INFO: Got endpoints: latency-svc-mxcgh [2.508433158s]
Feb  2 11:24:13.010: INFO: Created: latency-svc-tpm4q
Feb  2 11:24:13.021: INFO: Got endpoints: latency-svc-tpm4q [2.690485159s]
Feb  2 11:24:13.167: INFO: Created: latency-svc-m6mjj
Feb  2 11:24:13.168: INFO: Got endpoints: latency-svc-m6mjj [2.663956455s]
Feb  2 11:24:13.242: INFO: Created: latency-svc-h8ph8
Feb  2 11:24:13.330: INFO: Got endpoints: latency-svc-h8ph8 [2.617745354s]
Feb  2 11:24:13.373: INFO: Created: latency-svc-hqswr
Feb  2 11:24:13.384: INFO: Got endpoints: latency-svc-hqswr [2.621797485s]
Feb  2 11:24:13.538: INFO: Created: latency-svc-6hg42
Feb  2 11:24:13.553: INFO: Got endpoints: latency-svc-6hg42 [2.572519783s]
Feb  2 11:24:13.613: INFO: Created: latency-svc-fwm6r
Feb  2 11:24:13.631: INFO: Got endpoints: latency-svc-fwm6r [2.609991517s]
Feb  2 11:24:13.764: INFO: Created: latency-svc-vxtcg
Feb  2 11:24:13.795: INFO: Got endpoints: latency-svc-vxtcg [2.584616909s]
Feb  2 11:24:13.983: INFO: Created: latency-svc-zsjmj
Feb  2 11:24:14.539: INFO: Got endpoints: latency-svc-zsjmj [2.96576318s]
Feb  2 11:24:14.621: INFO: Created: latency-svc-7wd2q
Feb  2 11:24:14.996: INFO: Got endpoints: latency-svc-7wd2q [3.262735534s]
Feb  2 11:24:15.068: INFO: Created: latency-svc-qj6kg
Feb  2 11:24:15.132: INFO: Got endpoints: latency-svc-qj6kg [3.137228169s]
Feb  2 11:24:17.162: INFO: Created: latency-svc-cl64f
Feb  2 11:24:17.184: INFO: Got endpoints: latency-svc-cl64f [4.968811853s]
Feb  2 11:24:17.361: INFO: Created: latency-svc-px2z6
Feb  2 11:24:17.361: INFO: Got endpoints: latency-svc-px2z6 [5.076094489s]
Feb  2 11:24:17.406: INFO: Created: latency-svc-4nj4h
Feb  2 11:24:17.527: INFO: Got endpoints: latency-svc-4nj4h [4.885128236s]
Feb  2 11:24:17.557: INFO: Created: latency-svc-7pcjw
Feb  2 11:24:17.572: INFO: Got endpoints: latency-svc-7pcjw [4.917627514s]
Feb  2 11:24:17.745: INFO: Created: latency-svc-4bc5n
Feb  2 11:24:17.766: INFO: Got endpoints: latency-svc-4bc5n [4.965992906s]
Feb  2 11:24:17.827: INFO: Created: latency-svc-cjc44
Feb  2 11:24:17.953: INFO: Got endpoints: latency-svc-cjc44 [4.93238894s]
Feb  2 11:24:17.980: INFO: Created: latency-svc-7p8v5
Feb  2 11:24:18.016: INFO: Got endpoints: latency-svc-7p8v5 [4.848096709s]
Feb  2 11:24:18.201: INFO: Created: latency-svc-nkjtr
Feb  2 11:24:18.235: INFO: Got endpoints: latency-svc-nkjtr [4.904618688s]
Feb  2 11:24:18.411: INFO: Created: latency-svc-fnxjf
Feb  2 11:24:18.431: INFO: Got endpoints: latency-svc-fnxjf [5.046069805s]
Feb  2 11:24:18.653: INFO: Created: latency-svc-82sn5
Feb  2 11:24:18.715: INFO: Got endpoints: latency-svc-82sn5 [5.161231576s]
Feb  2 11:24:18.717: INFO: Created: latency-svc-kt25w
Feb  2 11:24:18.859: INFO: Got endpoints: latency-svc-kt25w [5.227714839s]
Feb  2 11:24:18.893: INFO: Created: latency-svc-lpn9v
Feb  2 11:24:18.907: INFO: Got endpoints: latency-svc-lpn9v [5.111652277s]
Feb  2 11:24:19.088: INFO: Created: latency-svc-r9scz
Feb  2 11:24:19.099: INFO: Got endpoints: latency-svc-r9scz [4.559834165s]
Feb  2 11:24:19.172: INFO: Created: latency-svc-gbrxz
Feb  2 11:24:19.366: INFO: Got endpoints: latency-svc-gbrxz [4.368874883s]
Feb  2 11:24:19.415: INFO: Created: latency-svc-8cttc
Feb  2 11:24:19.438: INFO: Got endpoints: latency-svc-8cttc [4.305898562s]
Feb  2 11:24:19.610: INFO: Created: latency-svc-q8f2x
Feb  2 11:24:19.653: INFO: Got endpoints: latency-svc-q8f2x [2.469428938s]
Feb  2 11:24:19.679: INFO: Created: latency-svc-mr4zq
Feb  2 11:24:19.829: INFO: Got endpoints: latency-svc-mr4zq [2.467429715s]
Feb  2 11:24:19.867: INFO: Created: latency-svc-j2lcm
Feb  2 11:24:19.891: INFO: Got endpoints: latency-svc-j2lcm [2.364498399s]
Feb  2 11:24:20.004: INFO: Created: latency-svc-5j4xf
Feb  2 11:24:20.029: INFO: Got endpoints: latency-svc-5j4xf [2.457163145s]
Feb  2 11:24:20.189: INFO: Created: latency-svc-59jrw
Feb  2 11:24:20.200: INFO: Got endpoints: latency-svc-59jrw [2.433792823s]
Feb  2 11:24:20.289: INFO: Created: latency-svc-lcq8b
Feb  2 11:24:20.335: INFO: Got endpoints: latency-svc-lcq8b [2.381333165s]
Feb  2 11:24:20.385: INFO: Created: latency-svc-p55sk
Feb  2 11:24:20.413: INFO: Got endpoints: latency-svc-p55sk [2.39698153s]
Feb  2 11:24:20.615: INFO: Created: latency-svc-7v54v
Feb  2 11:24:20.615: INFO: Got endpoints: latency-svc-7v54v [2.380383726s]
Feb  2 11:24:20.903: INFO: Created: latency-svc-jxnjc
Feb  2 11:24:20.913: INFO: Got endpoints: latency-svc-jxnjc [2.481996078s]
Feb  2 11:24:21.128: INFO: Created: latency-svc-fzctg
Feb  2 11:24:21.136: INFO: Got endpoints: latency-svc-fzctg [2.421152292s]
Feb  2 11:24:21.454: INFO: Created: latency-svc-2k8jn
Feb  2 11:24:21.556: INFO: Created: latency-svc-gz2ls
Feb  2 11:24:21.560: INFO: Got endpoints: latency-svc-2k8jn [2.700654279s]
Feb  2 11:24:21.725: INFO: Created: latency-svc-swdx5
Feb  2 11:24:21.734: INFO: Got endpoints: latency-svc-gz2ls [2.827679899s]
Feb  2 11:24:21.763: INFO: Got endpoints: latency-svc-swdx5 [2.664538679s]
Feb  2 11:24:21.813: INFO: Created: latency-svc-wnh7c
Feb  2 11:24:21.939: INFO: Got endpoints: latency-svc-wnh7c [2.573411788s]
Feb  2 11:24:21.981: INFO: Created: latency-svc-4gvlk
Feb  2 11:24:22.007: INFO: Got endpoints: latency-svc-4gvlk [2.568629929s]
Feb  2 11:24:22.113: INFO: Created: latency-svc-nqrkm
Feb  2 11:24:22.178: INFO: Got endpoints: latency-svc-nqrkm [2.524604931s]
Feb  2 11:24:22.342: INFO: Created: latency-svc-cwqln
Feb  2 11:24:22.363: INFO: Got endpoints: latency-svc-cwqln [2.534056335s]
Feb  2 11:24:22.637: INFO: Created: latency-svc-lnm5m
Feb  2 11:24:22.812: INFO: Got endpoints: latency-svc-lnm5m [2.920176037s]
Feb  2 11:24:22.842: INFO: Created: latency-svc-mncds
Feb  2 11:24:22.910: INFO: Created: latency-svc-6lwpz
Feb  2 11:24:23.093: INFO: Created: latency-svc-rhpxx
Feb  2 11:24:23.110: INFO: Got endpoints: latency-svc-mncds [3.081153431s]
Feb  2 11:24:23.118: INFO: Got endpoints: latency-svc-rhpxx [2.783497007s]
Feb  2 11:24:23.138: INFO: Got endpoints: latency-svc-6lwpz [2.937616671s]
Feb  2 11:24:23.157: INFO: Created: latency-svc-vs2pb
Feb  2 11:24:23.173: INFO: Got endpoints: latency-svc-vs2pb [2.759955273s]
Feb  2 11:24:23.324: INFO: Created: latency-svc-qh5sm
Feb  2 11:24:23.350: INFO: Got endpoints: latency-svc-qh5sm [2.734161012s]
Feb  2 11:24:23.395: INFO: Created: latency-svc-9w7jz
Feb  2 11:24:23.482: INFO: Got endpoints: latency-svc-9w7jz [2.56903364s]
Feb  2 11:24:23.507: INFO: Created: latency-svc-6vpth
Feb  2 11:24:23.525: INFO: Got endpoints: latency-svc-6vpth [2.388999054s]
Feb  2 11:24:23.583: INFO: Created: latency-svc-4q2m5
Feb  2 11:24:23.774: INFO: Got endpoints: latency-svc-4q2m5 [2.213888941s]
Feb  2 11:24:23.803: INFO: Created: latency-svc-6wkz2
Feb  2 11:24:23.835: INFO: Got endpoints: latency-svc-6wkz2 [2.100328011s]
Feb  2 11:24:24.028: INFO: Created: latency-svc-9kf4x
Feb  2 11:24:24.047: INFO: Got endpoints: latency-svc-9kf4x [2.28327499s]
Feb  2 11:24:24.200: INFO: Created: latency-svc-qscpm
Feb  2 11:24:24.214: INFO: Got endpoints: latency-svc-qscpm [2.274777425s]
Feb  2 11:24:24.304: INFO: Created: latency-svc-c8npl
Feb  2 11:24:24.378: INFO: Got endpoints: latency-svc-c8npl [2.370802856s]
Feb  2 11:24:24.425: INFO: Created: latency-svc-52p95
Feb  2 11:24:24.454: INFO: Got endpoints: latency-svc-52p95 [2.275147664s]
Feb  2 11:24:24.641: INFO: Created: latency-svc-4nffn
Feb  2 11:24:24.690: INFO: Got endpoints: latency-svc-4nffn [2.327429825s]
Feb  2 11:24:24.899: INFO: Created: latency-svc-x64hf
Feb  2 11:24:24.909: INFO: Got endpoints: latency-svc-x64hf [2.097317464s]
Feb  2 11:24:24.964: INFO: Created: latency-svc-85kdq
Feb  2 11:24:25.168: INFO: Got endpoints: latency-svc-85kdq [2.057880684s]
Feb  2 11:24:25.183: INFO: Created: latency-svc-pl6lx
Feb  2 11:24:25.430: INFO: Got endpoints: latency-svc-pl6lx [2.311725872s]
Feb  2 11:24:25.510: INFO: Created: latency-svc-mpmbv
Feb  2 11:24:25.620: INFO: Got endpoints: latency-svc-mpmbv [2.482628598s]
Feb  2 11:24:25.647: INFO: Created: latency-svc-4lcrk
Feb  2 11:24:25.672: INFO: Got endpoints: latency-svc-4lcrk [2.498308579s]
Feb  2 11:24:25.712: INFO: Created: latency-svc-htckz
Feb  2 11:24:25.831: INFO: Got endpoints: latency-svc-htckz [2.481304945s]
Feb  2 11:24:25.857: INFO: Created: latency-svc-xzj9s
Feb  2 11:24:25.902: INFO: Got endpoints: latency-svc-xzj9s [2.419764052s]
Feb  2 11:24:26.044: INFO: Created: latency-svc-rbln8
Feb  2 11:24:26.054: INFO: Got endpoints: latency-svc-rbln8 [2.528489165s]
Feb  2 11:24:26.108: INFO: Created: latency-svc-65zrm
Feb  2 11:24:26.110: INFO: Got endpoints: latency-svc-65zrm [2.335525644s]
Feb  2 11:24:26.272: INFO: Created: latency-svc-86rhf
Feb  2 11:24:26.304: INFO: Got endpoints: latency-svc-86rhf [2.468929566s]
Feb  2 11:24:26.566: INFO: Created: latency-svc-lcbc7
Feb  2 11:24:26.582: INFO: Got endpoints: latency-svc-lcbc7 [2.535174708s]
Feb  2 11:24:26.705: INFO: Created: latency-svc-kh9ss
Feb  2 11:24:26.724: INFO: Got endpoints: latency-svc-kh9ss [2.509767879s]
Feb  2 11:24:26.902: INFO: Created: latency-svc-pxgcd
Feb  2 11:24:26.937: INFO: Got endpoints: latency-svc-pxgcd [2.559176408s]
Feb  2 11:24:27.071: INFO: Created: latency-svc-lk7qx
Feb  2 11:24:27.083: INFO: Got endpoints: latency-svc-lk7qx [2.628585986s]
Feb  2 11:24:27.158: INFO: Created: latency-svc-t9l77
Feb  2 11:24:27.216: INFO: Got endpoints: latency-svc-t9l77 [2.52507496s]
Feb  2 11:24:27.276: INFO: Created: latency-svc-jsmp8
Feb  2 11:24:27.307: INFO: Got endpoints: latency-svc-jsmp8 [2.397708079s]
Feb  2 11:24:27.506: INFO: Created: latency-svc-p7qld
Feb  2 11:24:27.522: INFO: Got endpoints: latency-svc-p7qld [2.352915557s]
Feb  2 11:24:27.740: INFO: Created: latency-svc-rltpz
Feb  2 11:24:27.750: INFO: Got endpoints: latency-svc-rltpz [2.319204571s]
Feb  2 11:24:27.811: INFO: Created: latency-svc-rf24c
Feb  2 11:24:27.924: INFO: Got endpoints: latency-svc-rf24c [2.303626522s]
Feb  2 11:24:27.940: INFO: Created: latency-svc-d7qm4
Feb  2 11:24:27.973: INFO: Got endpoints: latency-svc-d7qm4 [2.301087786s]
Feb  2 11:24:28.139: INFO: Created: latency-svc-4rhnf
Feb  2 11:24:28.139: INFO: Got endpoints: latency-svc-4rhnf [2.307311804s]
Feb  2 11:24:28.336: INFO: Created: latency-svc-6rgg4
Feb  2 11:24:28.341: INFO: Got endpoints: latency-svc-6rgg4 [2.438868597s]
Feb  2 11:24:28.423: INFO: Created: latency-svc-mqhgd
Feb  2 11:24:28.527: INFO: Got endpoints: latency-svc-mqhgd [2.472853925s]
Feb  2 11:24:28.566: INFO: Created: latency-svc-pwlwq
Feb  2 11:24:28.576: INFO: Got endpoints: latency-svc-pwlwq [2.466135681s]
Feb  2 11:24:28.738: INFO: Created: latency-svc-x87c2
Feb  2 11:24:28.749: INFO: Got endpoints: latency-svc-x87c2 [2.444889804s]
Feb  2 11:24:28.800: INFO: Created: latency-svc-frqcp
Feb  2 11:24:28.809: INFO: Got endpoints: latency-svc-frqcp [2.226606796s]
Feb  2 11:24:28.945: INFO: Created: latency-svc-g7n5w
Feb  2 11:24:30.193: INFO: Got endpoints: latency-svc-g7n5w [3.469319619s]
Feb  2 11:24:30.301: INFO: Created: latency-svc-nc6h8
Feb  2 11:24:30.403: INFO: Got endpoints: latency-svc-nc6h8 [3.465434899s]
Feb  2 11:24:30.449: INFO: Created: latency-svc-xzppd
Feb  2 11:24:30.631: INFO: Got endpoints: latency-svc-xzppd [3.547978414s]
Feb  2 11:24:30.671: INFO: Created: latency-svc-qbgv5
Feb  2 11:24:30.694: INFO: Got endpoints: latency-svc-qbgv5 [3.477869829s]
Feb  2 11:24:30.804: INFO: Created: latency-svc-jl54j
Feb  2 11:24:30.834: INFO: Got endpoints: latency-svc-jl54j [3.526612334s]
Feb  2 11:24:31.071: INFO: Created: latency-svc-s6dl4
Feb  2 11:24:31.079: INFO: Got endpoints: latency-svc-s6dl4 [3.557116683s]
Feb  2 11:24:31.180: INFO: Created: latency-svc-wpgtl
Feb  2 11:24:31.811: INFO: Got endpoints: latency-svc-wpgtl [4.060600557s]
Feb  2 11:24:32.409: INFO: Created: latency-svc-wcj2v
Feb  2 11:24:32.409: INFO: Got endpoints: latency-svc-wcj2v [4.484584195s]
Feb  2 11:24:32.578: INFO: Created: latency-svc-dnhfp
Feb  2 11:24:32.645: INFO: Got endpoints: latency-svc-dnhfp [4.672133958s]
Feb  2 11:24:32.831: INFO: Created: latency-svc-hdfgs
Feb  2 11:24:32.848: INFO: Got endpoints: latency-svc-hdfgs [4.708648481s]
Feb  2 11:24:32.877: INFO: Created: latency-svc-n45x4
Feb  2 11:24:32.906: INFO: Got endpoints: latency-svc-n45x4 [4.564935266s]
Feb  2 11:24:33.029: INFO: Created: latency-svc-znnzq
Feb  2 11:24:33.050: INFO: Got endpoints: latency-svc-znnzq [4.522286522s]
Feb  2 11:24:33.104: INFO: Created: latency-svc-7bv59
Feb  2 11:24:33.258: INFO: Got endpoints: latency-svc-7bv59 [4.681761929s]
Feb  2 11:24:33.326: INFO: Created: latency-svc-mwd7z
Feb  2 11:24:33.442: INFO: Created: latency-svc-xlqdd
Feb  2 11:24:33.470: INFO: Got endpoints: latency-svc-mwd7z [4.720214804s]
Feb  2 11:24:33.490: INFO: Got endpoints: latency-svc-xlqdd [4.681082417s]
Feb  2 11:24:33.671: INFO: Created: latency-svc-gpzwt
Feb  2 11:24:33.697: INFO: Got endpoints: latency-svc-gpzwt [3.502735043s]
Feb  2 11:24:33.886: INFO: Created: latency-svc-tcv29
Feb  2 11:24:34.050: INFO: Got endpoints: latency-svc-tcv29 [3.646038096s]
Feb  2 11:24:34.052: INFO: Created: latency-svc-4mhl7
Feb  2 11:24:34.058: INFO: Got endpoints: latency-svc-4mhl7 [3.427584588s]
Feb  2 11:24:34.129: INFO: Created: latency-svc-njjq6
Feb  2 11:24:34.280: INFO: Got endpoints: latency-svc-njjq6 [3.586458403s]
Feb  2 11:24:34.301: INFO: Created: latency-svc-2n4sp
Feb  2 11:24:34.323: INFO: Got endpoints: latency-svc-2n4sp [3.488628772s]
Feb  2 11:24:34.516: INFO: Created: latency-svc-z5cw5
Feb  2 11:24:34.603: INFO: Got endpoints: latency-svc-z5cw5 [3.52419247s]
Feb  2 11:24:34.618: INFO: Created: latency-svc-pvmgr
Feb  2 11:24:34.620: INFO: Got endpoints: latency-svc-pvmgr [2.809169591s]
Feb  2 11:24:34.788: INFO: Created: latency-svc-slh4r
Feb  2 11:24:34.789: INFO: Got endpoints: latency-svc-slh4r [2.37932802s]
Feb  2 11:24:34.848: INFO: Created: latency-svc-7qh6t
Feb  2 11:24:35.156: INFO: Got endpoints: latency-svc-7qh6t [2.5098282s]
Feb  2 11:24:35.193: INFO: Created: latency-svc-ssvw5
Feb  2 11:24:35.367: INFO: Created: latency-svc-th9pc
Feb  2 11:24:35.380: INFO: Got endpoints: latency-svc-ssvw5 [2.531749882s]
Feb  2 11:24:35.392: INFO: Got endpoints: latency-svc-th9pc [2.48541687s]
Feb  2 11:24:35.592: INFO: Created: latency-svc-6t7v9
Feb  2 11:24:35.604: INFO: Got endpoints: latency-svc-6t7v9 [2.553685772s]
Feb  2 11:24:35.684: INFO: Created: latency-svc-jcpq5
Feb  2 11:24:35.782: INFO: Got endpoints: latency-svc-jcpq5 [2.523551046s]
Feb  2 11:24:35.850: INFO: Created: latency-svc-gxtzm
Feb  2 11:24:35.874: INFO: Got endpoints: latency-svc-gxtzm [2.403886986s]
Feb  2 11:24:35.980: INFO: Created: latency-svc-qbh6g
Feb  2 11:24:36.011: INFO: Got endpoints: latency-svc-qbh6g [2.520274377s]
Feb  2 11:24:36.052: INFO: Created: latency-svc-59x99
Feb  2 11:24:36.203: INFO: Got endpoints: latency-svc-59x99 [2.506483414s]
Feb  2 11:24:36.231: INFO: Created: latency-svc-p87lj
Feb  2 11:24:36.250: INFO: Got endpoints: latency-svc-p87lj [2.200095343s]
Feb  2 11:24:36.372: INFO: Created: latency-svc-z98pv
Feb  2 11:24:36.395: INFO: Got endpoints: latency-svc-z98pv [2.336798743s]
Feb  2 11:24:36.453: INFO: Created: latency-svc-nddk6
Feb  2 11:24:36.636: INFO: Got endpoints: latency-svc-nddk6 [2.355328652s]
Feb  2 11:24:36.975: INFO: Created: latency-svc-lf2s8
Feb  2 11:24:37.120: INFO: Got endpoints: latency-svc-lf2s8 [2.797071005s]
Feb  2 11:24:37.184: INFO: Created: latency-svc-jb4sn
Feb  2 11:24:37.390: INFO: Got endpoints: latency-svc-jb4sn [2.786573003s]
Feb  2 11:24:37.404: INFO: Created: latency-svc-8cdzm
Feb  2 11:24:37.418: INFO: Got endpoints: latency-svc-8cdzm [2.797740004s]
Feb  2 11:24:37.465: INFO: Created: latency-svc-tljd5
Feb  2 11:24:37.467: INFO: Got endpoints: latency-svc-tljd5 [2.677927293s]
Feb  2 11:24:37.603: INFO: Created: latency-svc-mnlkm
Feb  2 11:24:37.619: INFO: Got endpoints: latency-svc-mnlkm [2.462844593s]
Feb  2 11:24:37.687: INFO: Created: latency-svc-x6v5m
Feb  2 11:24:37.839: INFO: Got endpoints: latency-svc-x6v5m [2.458726216s]
Feb  2 11:24:37.903: INFO: Created: latency-svc-42z4b
Feb  2 11:24:37.924: INFO: Got endpoints: latency-svc-42z4b [2.531708349s]
Feb  2 11:24:38.063: INFO: Created: latency-svc-klxh2
Feb  2 11:24:38.084: INFO: Got endpoints: latency-svc-klxh2 [2.480705551s]
Feb  2 11:24:38.175: INFO: Created: latency-svc-7lkvt
Feb  2 11:24:38.175: INFO: Got endpoints: latency-svc-7lkvt [2.392475879s]
Feb  2 11:24:38.341: INFO: Created: latency-svc-68p7x
Feb  2 11:24:38.372: INFO: Got endpoints: latency-svc-68p7x [2.4986395s]
Feb  2 11:24:38.532: INFO: Created: latency-svc-twch9
Feb  2 11:24:38.912: INFO: Got endpoints: latency-svc-twch9 [2.901472912s]
Feb  2 11:24:38.917: INFO: Created: latency-svc-l9b7r
Feb  2 11:24:39.328: INFO: Got endpoints: latency-svc-l9b7r [3.124550666s]
Feb  2 11:24:39.391: INFO: Created: latency-svc-zb5qd
Feb  2 11:24:39.629: INFO: Got endpoints: latency-svc-zb5qd [3.378332219s]
Feb  2 11:24:39.678: INFO: Created: latency-svc-kjjd8
Feb  2 11:24:39.725: INFO: Got endpoints: latency-svc-kjjd8 [3.329442819s]
Feb  2 11:24:39.949: INFO: Created: latency-svc-kddrs
Feb  2 11:24:39.981: INFO: Got endpoints: latency-svc-kddrs [3.34487725s]
Feb  2 11:24:40.033: INFO: Created: latency-svc-zbj9q
Feb  2 11:24:40.192: INFO: Got endpoints: latency-svc-zbj9q [3.072125753s]
Feb  2 11:24:40.281: INFO: Created: latency-svc-gwtwx
Feb  2 11:24:40.281: INFO: Got endpoints: latency-svc-gwtwx [2.890784138s]
Feb  2 11:24:40.419: INFO: Created: latency-svc-ncz7d
Feb  2 11:24:40.438: INFO: Got endpoints: latency-svc-ncz7d [3.020078849s]
Feb  2 11:24:40.581: INFO: Created: latency-svc-f5dq8
Feb  2 11:24:40.645: INFO: Got endpoints: latency-svc-f5dq8 [3.178136002s]
Feb  2 11:24:40.839: INFO: Created: latency-svc-wtnjl
Feb  2 11:24:40.979: INFO: Got endpoints: latency-svc-wtnjl [3.360068518s]
Feb  2 11:24:41.054: INFO: Created: latency-svc-flp22
Feb  2 11:24:41.067: INFO: Got endpoints: latency-svc-flp22 [3.22782118s]
Feb  2 11:24:41.170: INFO: Created: latency-svc-wgftb
Feb  2 11:24:41.179: INFO: Got endpoints: latency-svc-wgftb [3.255628942s]
Feb  2 11:24:41.404: INFO: Created: latency-svc-swwlz
Feb  2 11:24:41.423: INFO: Got endpoints: latency-svc-swwlz [3.338532175s]
Feb  2 11:24:41.466: INFO: Created: latency-svc-5h5cr
Feb  2 11:24:41.617: INFO: Got endpoints: latency-svc-5h5cr [3.442021608s]
Feb  2 11:24:41.617: INFO: Latencies: [343.252352ms 405.883718ms 454.974356ms 715.171838ms 965.770959ms 1.015818941s 1.195680636s 1.419850557s 1.648630454s 1.845167911s 1.87324428s 1.974826071s 2.057880684s 2.06921036s 2.097317464s 2.100328011s 2.168011499s 2.171694339s 2.200095343s 2.213888941s 2.226606796s 2.255079336s 2.274777425s 2.275147664s 2.28327499s 2.301087786s 2.301309814s 2.303626522s 2.307311804s 2.311725872s 2.319204571s 2.327429825s 2.333005158s 2.335525644s 2.336798743s 2.340897739s 2.348747364s 2.352915557s 2.355328652s 2.364498399s 2.370802856s 2.37932802s 2.380383726s 2.381333165s 2.388999054s 2.392475879s 2.39698153s 2.397708079s 2.403886986s 2.41641019s 2.419764052s 2.421152292s 2.433792823s 2.438868597s 2.444889804s 2.457163145s 2.458726216s 2.462844593s 2.466135681s 2.467429715s 2.468929566s 2.469343741s 2.469428938s 2.472853925s 2.480705551s 2.481304945s 2.481996078s 2.482628598s 2.48541687s 2.496739347s 2.498308579s 2.4986395s 2.506483414s 2.508433158s 2.509767879s 2.5098282s 2.515591318s 2.520274377s 2.523551046s 2.524604931s 2.52507496s 2.528489165s 2.531708349s 2.531749882s 2.534056335s 2.535174708s 2.547339443s 2.553685772s 2.556552062s 2.559176408s 2.55923659s 2.568629929s 2.56903364s 2.569995456s 2.572519783s 2.573411788s 2.584616909s 2.58614298s 2.599390503s 2.609991517s 2.617745354s 2.621797485s 2.628585986s 2.62934648s 2.632007629s 2.632702238s 2.646460074s 2.646707417s 2.663956455s 2.664538679s 2.667376639s 2.677927293s 2.681032112s 2.688538703s 2.690485159s 2.700654279s 2.724885802s 2.734161012s 2.735036405s 2.759955273s 2.783497007s 2.784631305s 2.786417605s 2.786573003s 2.796397094s 2.797071005s 2.797740004s 2.805181823s 2.809169591s 2.816605005s 2.826274595s 2.827679899s 2.843077621s 2.846665353s 2.875050399s 2.878186484s 2.879182621s 2.885335638s 2.890784138s 2.897662769s 2.901472912s 2.92001094s 2.920176037s 2.937616671s 2.96576318s 2.966156914s 2.977388323s 3.020078849s 3.068292516s 3.072125753s 3.081153431s 3.124550666s 3.137228169s 3.178136002s 3.194748312s 3.22782118s 3.255628942s 3.262735534s 3.329442819s 3.338532175s 3.34487725s 3.360068518s 3.378332219s 3.427584588s 3.442021608s 3.465434899s 3.469319619s 3.477869829s 3.488628772s 3.502735043s 3.52419247s 3.526612334s 3.547978414s 3.557116683s 3.586458403s 3.646038096s 4.060600557s 4.305898562s 4.368874883s 4.484584195s 4.522286522s 4.559834165s 4.564935266s 4.672133958s 4.681082417s 4.681761929s 4.708648481s 4.720214804s 4.848096709s 4.885128236s 4.904618688s 4.917627514s 4.93238894s 4.965992906s 4.968811853s 5.046069805s 5.076094489s 5.111652277s 5.161231576s 5.227714839s]
Feb  2 11:24:41.617: INFO: 50 %ile: 2.617745354s
Feb  2 11:24:41.617: INFO: 90 %ile: 4.522286522s
Feb  2 11:24:41.617: INFO: 99 %ile: 5.161231576s
Feb  2 11:24:41.617: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:24:41.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-5kxdx" for this suite.
Feb  2 11:25:37.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:25:37.889: INFO: namespace: e2e-tests-svc-latency-5kxdx, resource: bindings, ignored listing per whitelist
Feb  2 11:25:37.937: INFO: namespace e2e-tests-svc-latency-5kxdx deletion completed in 56.298347266s

• [SLOW TEST:105.516 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
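The latency spec above sorts its 200 samples and reports the 50/90/99th percentiles by indexing into the sorted list. A minimal sketch of that computation — note the index convention (`n * perc // 100`, clamped) is an assumption for illustration, not the verified k8s e2e framework implementation:

```python
# Sketch: derive percentile values from a sorted latency sample list,
# the way the "50 %ile / 90 %ile / 99 %ile" summary lines are produced.
# The exact index convention is an assumption, not k8s e2e source code.

def percentile(sorted_samples, perc):
    """Return the value at the perc-th percentile of an ascending list."""
    if not sorted_samples:
        raise ValueError("no samples")
    idx = min(len(sorted_samples) * perc // 100, len(sorted_samples) - 1)
    return sorted_samples[idx]

# Placeholder samples in seconds, not the full 200-entry list from the log.
latencies = sorted([0.343, 0.405, 1.015, 2.617, 2.8, 3.1, 4.5, 5.2])
for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(latencies, p)}s")
```

With integer flooring, the 99th percentile of 200 samples lands on index 198, i.e. the second-largest sample — consistent with the log reporting 5.161231576s rather than the 5.227714839s maximum.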
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:25:37.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:25:45.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-597vm" for this suite.
Feb  2 11:25:51.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:25:51.800: INFO: namespace: e2e-tests-namespaces-597vm, resource: bindings, ignored listing per whitelist
Feb  2 11:25:51.943: INFO: namespace e2e-tests-namespaces-597vm deletion completed in 6.23003474s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7bmcl" for this suite.
Feb  2 11:25:51.946: INFO: Namespace e2e-tests-nsdeletetest-7bmcl was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-wr4wg" for this suite.
Feb  2 11:25:58.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:25:58.120: INFO: namespace: e2e-tests-nsdeletetest-wr4wg, resource: bindings, ignored listing per whitelist
Feb  2 11:25:58.187: INFO: namespace e2e-tests-nsdeletetest-wr4wg deletion completed in 6.240790448s

• [SLOW TEST:20.249 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
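The namespaces spec above checks one invariant: deleting a namespace removes the services created in it, and recreating the namespace yields an empty one. A toy model of that invariant (purely illustrative — not the API server's actual cascade logic):

```python
# Toy model: a cluster maps namespace name -> set of service names.
# Deleting a namespace drops its services with it; a recreated namespace
# starts empty. Illustrates the invariant the test asserts, nothing more.

class Cluster:
    def __init__(self):
        self.namespaces = {}

    def create_namespace(self, name):
        self.namespaces[name] = set()

    def create_service(self, ns, svc):
        self.namespaces[ns].add(svc)

    def delete_namespace(self, name):
        del self.namespaces[name]  # services are namespaced, so they go too

    def services(self, ns):
        return sorted(self.namespaces[ns])

c = Cluster()
c.create_namespace("nsdeletetest")
c.create_service("nsdeletetest", "test-service")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")
print(c.services("nsdeletetest"))  # []
```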
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:25:58.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  2 11:25:58.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:00.342: INFO: stderr: ""
Feb  2 11:26:00.342: INFO: stdout: "pod/pause created\n"
Feb  2 11:26:00.342: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  2 11:26:00.342: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-gf7r6" to be "running and ready"
Feb  2 11:26:00.390: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 47.723049ms
Feb  2 11:26:02.409: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067111531s
Feb  2 11:26:04.441: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09842343s
Feb  2 11:26:06.463: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120851781s
Feb  2 11:26:08.489: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.146393149s
Feb  2 11:26:08.489: INFO: Pod "pause" satisfied condition "running and ready"
Feb  2 11:26:08.489: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  2 11:26:08.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:08.735: INFO: stderr: ""
Feb  2 11:26:08.735: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  2 11:26:08.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:08.865: INFO: stderr: ""
Feb  2 11:26:08.866: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  2 11:26:08.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:09.065: INFO: stderr: ""
Feb  2 11:26:09.065: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  2 11:26:09.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:09.276: INFO: stderr: ""
Feb  2 11:26:09.276: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  2 11:26:09.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:09.476: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 11:26:09.476: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  2 11:26:09.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-gf7r6'
Feb  2 11:26:09.635: INFO: stderr: "No resources found.\n"
Feb  2 11:26:09.635: INFO: stdout: ""
Feb  2 11:26:09.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-gf7r6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 11:26:09.772: INFO: stderr: ""
Feb  2 11:26:09.772: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:26:09.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gf7r6" for this suite.
Feb  2 11:26:16.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:26:16.498: INFO: namespace: e2e-tests-kubectl-gf7r6, resource: bindings, ignored listing per whitelist
Feb  2 11:26:16.707: INFO: namespace e2e-tests-kubectl-gf7r6 deletion completed in 6.924686596s

• [SLOW TEST:18.520 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
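The kubectl-label spec exercises both argument forms shown in the transcript: `testing-label=testing-label-value` sets a label, and the trailing-dash form `testing-label-` removes it. A small helper modeling that CLI argument convention on a plain dict (an illustration of the semantics; kubectl itself also validates keys and handles resource versions):

```python
# Model the `kubectl label` argument convention on a label dict:
#   "key=value" -> set/update the label
#   "key-"      -> remove the label (trailing dash)
# Illustrative only; not kubectl source.

def apply_label_args(labels, args):
    labels = dict(labels)
    for arg in args:
        if "=" in arg:
            key, value = arg.split("=", 1)
            labels[key] = value
        elif arg.endswith("-"):
            labels.pop(arg[:-1], None)
        else:
            raise ValueError(f"unrecognized label arg: {arg!r}")
    return labels

labels = apply_label_args({}, ["testing-label=testing-label-value"])
print(labels)  # {'testing-label': 'testing-label-value'}
labels = apply_label_args(labels, ["testing-label-"])
print(labels)  # {}
```

Checking `=` before the trailing dash matters: a label *value* may legitimately end in a dash, while a bare `key-` is unambiguous removal.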
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:26:16.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:26:17.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-g5q2s" for this suite.
Feb  2 11:26:23.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:26:23.261: INFO: namespace: e2e-tests-kubelet-test-g5q2s, resource: bindings, ignored listing per whitelist
Feb  2 11:26:23.270: INFO: namespace e2e-tests-kubelet-test-g5q2s deletion completed in 6.119803993s

• [SLOW TEST:6.561 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:26:23.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 11:26:23.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-xc8qr'
Feb  2 11:26:23.676: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 11:26:23.676: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb  2 11:26:27.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xc8qr'
Feb  2 11:26:28.052: INFO: stderr: ""
Feb  2 11:26:28.053: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:26:28.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xc8qr" for this suite.
Feb  2 11:26:34.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:26:34.181: INFO: namespace: e2e-tests-kubectl-xc8qr, resource: bindings, ignored listing per whitelist
Feb  2 11:26:34.357: INFO: namespace e2e-tests-kubectl-xc8qr deletion completed in 6.290184399s

• [SLOW TEST:11.087 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:26:34.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:26:34.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-9ztpm" to be "success or failure"
Feb  2 11:26:34.676: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.470752ms
Feb  2 11:26:36.687: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025817947s
Feb  2 11:26:38.717: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055912533s
Feb  2 11:26:40.776: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114834198s
Feb  2 11:26:42.801: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139438626s
STEP: Saw pod success
Feb  2 11:26:42.801: INFO: Pod "downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:26:42.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:26:43.221: INFO: Waiting for pod downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005 to disappear
Feb  2 11:26:43.255: INFO: Pod downwardapi-volume-de6549f5-45ae-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:26:43.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9ztpm" for this suite.
Feb  2 11:26:51.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:26:51.416: INFO: namespace: e2e-tests-projected-9ztpm, resource: bindings, ignored listing per whitelist
Feb  2 11:26:51.497: INFO: namespace e2e-tests-projected-9ztpm deletion completed in 8.227343981s

• [SLOW TEST:17.137 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
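The repeated `Phase="Pending" ... Elapsed:` lines in the specs above come from a poll loop: check the condition, sleep a fixed interval, give up at a timeout (here roughly 2s polls under a 5m cap). A generic sketch of that pattern, with the clock and sleep injected so it runs instantly in a test (interval/timeout values are placeholders):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    start = clock()
    while True:
        if condition():
            return True
        if clock() - start >= timeout:
            return False
        sleep(interval)

# Simulated pod phases and a fake clock so the sketch needs no real waiting.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"now": 0.0}
def fake_sleep(s): state["now"] += s
ok = wait_for(lambda: next(phases) == "Succeeded",
              timeout=300.0, interval=2.0,
              clock=lambda: state["now"], sleep=fake_sleep)
print(ok, state["now"])  # True 4.0
```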
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:26:51.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0202 11:27:01.831423       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 11:27:01.831: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:27:01.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qx7cd" for this suite.
Feb  2 11:27:07.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:27:08.043: INFO: namespace: e2e-tests-gc-qx7cd, resource: bindings, ignored listing per whitelist
Feb  2 11:27:08.065: INFO: namespace e2e-tests-gc-qx7cd deletion completed in 6.224869103s

• [SLOW TEST:16.566 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
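"Delete pods created by rc when not orphaning" exercises cascading deletion via ownerReferences: when the ReplicationController is deleted without orphaning, the garbage collector removes the pods it owned. A simplified model of that sweep (illustrative — the real controller builds a dependency graph and processes deletions asynchronously; orphaning actually clears the ownerReference rather than skipping the sweep):

```python
# Simplified owner-reference garbage collection. `objects` maps each
# object name to its owner's name (or None for top-level objects).
# Deleting without orphaning cascades to everything whose owner is gone.

def delete(objects, name, orphan=False):
    objects = dict(objects)
    objects.pop(name)
    if not orphan:
        removed = True
        while removed:  # repeat so multi-level ownership chains collapse
            removed = False
            for obj, owner in list(objects.items()):
                if owner is not None and owner not in objects:
                    del objects[obj]
                    removed = True
    return objects

cluster = {"simpletest.rc": None,
           "simpletest.rc-pod-1": "simpletest.rc",
           "simpletest.rc-pod-2": "simpletest.rc"}
print(sorted(delete(cluster, "simpletest.rc")))              # []
print(sorted(delete(cluster, "simpletest.rc", orphan=True)))
# ['simpletest.rc-pod-1', 'simpletest.rc-pod-2']
```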
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:27:08.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  2 11:27:08.294: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:27:22.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-psp9b" for this suite.
Feb  2 11:27:30.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:27:30.921: INFO: namespace: e2e-tests-init-container-psp9b, resource: bindings, ignored listing per whitelist
Feb  2 11:27:31.016: INFO: namespace e2e-tests-init-container-psp9b deletion completed in 8.292774133s

• [SLOW TEST:22.951 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:27:31.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:27:31.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-cj486" to be "success or failure"
Feb  2 11:27:31.280: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262084ms
Feb  2 11:27:33.290: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019873849s
Feb  2 11:27:35.303: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032353537s
Feb  2 11:27:37.317: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047139184s
Feb  2 11:27:39.347: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077241377s
STEP: Saw pod success
Feb  2 11:27:39.348: INFO: Pod "downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:27:39.374: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:27:39.481: INFO: Waiting for pod downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:27:39.509: INFO: Pod downwardapi-volume-002967d5-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:27:39.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cj486" for this suite.
Feb  2 11:27:45.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:27:45.768: INFO: namespace: e2e-tests-downward-api-cj486, resource: bindings, ignored listing per whitelist
Feb  2 11:27:45.827: INFO: namespace e2e-tests-downward-api-cj486 deletion completed in 6.291258189s

• [SLOW TEST:14.810 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:27:45.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  2 11:27:45.978: INFO: Waiting up to 5m0s for pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-npr27" to be "success or failure"
Feb  2 11:27:46.053: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.215737ms
Feb  2 11:27:48.069: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090597719s
Feb  2 11:27:50.322: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343677766s
Feb  2 11:27:52.353: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374705828s
Feb  2 11:27:54.364: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.38547891s
STEP: Saw pod success
Feb  2 11:27:54.364: INFO: Pod "pod-08ee8eeb-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:27:54.382: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-08ee8eeb-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:27:54.455: INFO: Waiting for pod pod-08ee8eeb-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:27:54.524: INFO: Pod pod-08ee8eeb-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:27:54.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-npr27" for this suite.
Feb  2 11:28:00.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:28:00.738: INFO: namespace: e2e-tests-emptydir-npr27, resource: bindings, ignored listing per whitelist
Feb  2 11:28:00.828: INFO: namespace e2e-tests-emptydir-npr27 deletion completed in 6.2834277s

• [SLOW TEST:15.001 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
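Both the projected-volume DefaultMode spec and the emptydir `(non-root,0666,tmpfs)` spec ultimately assert a file's permission bits inside the pod. A standalone sketch of that check with the standard library (the temp-file path is a placeholder; an explicit `chmod` sidesteps the process umask so the mode is deterministic):

```python
import os
import stat
import tempfile

# Create a file, force its mode to 0666, and read the permission bits
# back -- the same kind of assertion the volume mode tests make.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)                      # explicit, so umask is irrelevant
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                           # 0o666
os.unlink(path)
```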
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:28:00.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-11e5af99-45af-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:28:01.022: INFO: Waiting up to 5m0s for pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-vtx6m" to be "success or failure"
Feb  2 11:28:01.029: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.368921ms
Feb  2 11:28:03.058: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036046115s
Feb  2 11:28:05.068: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04649942s
Feb  2 11:28:07.095: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073093013s
Feb  2 11:28:09.110: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087874877s
Feb  2 11:28:11.123: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100937798s
STEP: Saw pod success
Feb  2 11:28:11.123: INFO: Pod "pod-secrets-11e651db-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:28:11.128: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-11e651db-45af-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 11:28:11.686: INFO: Waiting for pod pod-secrets-11e651db-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:28:11.746: INFO: Pod pod-secrets-11e651db-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:28:11.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vtx6m" for this suite.
Feb  2 11:28:17.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:28:17.926: INFO: namespace: e2e-tests-secrets-vtx6m, resource: bindings, ignored listing per whitelist
Feb  2 11:28:17.976: INFO: namespace e2e-tests-secrets-vtx6m deletion completed in 6.216387339s

• [SLOW TEST:17.148 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:28:17.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  2 11:28:18.191: INFO: Waiting up to 5m0s for pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-xtcnq" to be "success or failure"
Feb  2 11:28:18.205: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.101663ms
Feb  2 11:28:20.230: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039694704s
Feb  2 11:28:22.269: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078048761s
Feb  2 11:28:24.289: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098425519s
Feb  2 11:28:26.316: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125083518s
STEP: Saw pod success
Feb  2 11:28:26.316: INFO: Pod "pod-1c20ab83-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:28:26.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1c20ab83-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:28:26.452: INFO: Waiting for pod pod-1c20ab83-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:28:26.464: INFO: Pod pod-1c20ab83-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:28:26.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xtcnq" for this suite.
Feb  2 11:28:32.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:28:32.680: INFO: namespace: e2e-tests-emptydir-xtcnq, resource: bindings, ignored listing per whitelist
Feb  2 11:28:32.839: INFO: namespace e2e-tests-emptydir-xtcnq deletion completed in 6.350784685s

• [SLOW TEST:14.863 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:28:32.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:28:32.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5xqgl" for this suite.
Feb  2 11:28:39.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:28:39.172: INFO: namespace: e2e-tests-services-5xqgl, resource: bindings, ignored listing per whitelist
Feb  2 11:28:39.222: INFO: namespace e2e-tests-services-5xqgl deletion completed in 6.234292441s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.383 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:28:39.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  2 11:28:39.474: INFO: Waiting up to 5m0s for pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-chwgv" to be "success or failure"
Feb  2 11:28:39.493: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.806384ms
Feb  2 11:28:41.508: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033339751s
Feb  2 11:28:43.520: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045269469s
Feb  2 11:28:46.312: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837652876s
Feb  2 11:28:48.353: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877966724s
Feb  2 11:28:50.367: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.892495907s
STEP: Saw pod success
Feb  2 11:28:50.367: INFO: Pod "downward-api-28d1668c-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:28:50.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-28d1668c-45af-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 11:28:50.927: INFO: Waiting for pod downward-api-28d1668c-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:28:51.112: INFO: Pod downward-api-28d1668c-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:28:51.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-chwgv" for this suite.
Feb  2 11:28:57.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:28:57.319: INFO: namespace: e2e-tests-downward-api-chwgv, resource: bindings, ignored listing per whitelist
Feb  2 11:28:57.378: INFO: namespace e2e-tests-downward-api-chwgv deletion completed in 6.251209041s

• [SLOW TEST:18.156 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:28:57.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  2 11:28:57.611: INFO: Waiting up to 5m0s for pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-82fsr" to be "success or failure"
Feb  2 11:28:57.634: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.142482ms
Feb  2 11:28:59.661: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050174516s
Feb  2 11:29:01.673: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06204108s
Feb  2 11:29:03.714: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102557733s
Feb  2 11:29:05.745: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133871459s
Feb  2 11:29:07.784: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173004019s
STEP: Saw pod success
Feb  2 11:29:07.784: INFO: Pod "pod-339e5ae2-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:29:07.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-339e5ae2-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:29:08.348: INFO: Waiting for pod pod-339e5ae2-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:29:08.400: INFO: Pod pod-339e5ae2-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:29:08.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-82fsr" for this suite.
Feb  2 11:29:14.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:29:14.792: INFO: namespace: e2e-tests-emptydir-82fsr, resource: bindings, ignored listing per whitelist
Feb  2 11:29:14.795: INFO: namespace e2e-tests-emptydir-82fsr deletion completed in 6.383461749s

• [SLOW TEST:17.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:29:14.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3df766ec-45af-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:29:15.262: INFO: Waiting up to 5m0s for pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-2nwhk" to be "success or failure"
Feb  2 11:29:15.267: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851339ms
Feb  2 11:29:17.279: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016785402s
Feb  2 11:29:19.290: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028304793s
Feb  2 11:29:21.307: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044904378s
Feb  2 11:29:23.328: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065844574s
Feb  2 11:29:25.345: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083534108s
STEP: Saw pod success
Feb  2 11:29:25.345: INFO: Pod "pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:29:25.354: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 11:29:25.407: INFO: Waiting for pod pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:29:25.445: INFO: Pod pod-secrets-3e19ff8e-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:29:25.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2nwhk" for this suite.
Feb  2 11:29:31.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:29:31.686: INFO: namespace: e2e-tests-secrets-2nwhk, resource: bindings, ignored listing per whitelist
Feb  2 11:29:31.776: INFO: namespace e2e-tests-secrets-2nwhk deletion completed in 6.318379798s
STEP: Destroying namespace "e2e-tests-secret-namespace-tfqmj" for this suite.
Feb  2 11:29:37.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:29:38.095: INFO: namespace: e2e-tests-secret-namespace-tfqmj, resource: bindings, ignored listing per whitelist
Feb  2 11:29:38.095: INFO: namespace e2e-tests-secret-namespace-tfqmj deletion completed in 6.318250545s

• [SLOW TEST:23.300 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:29:38.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  2 11:29:38.470: INFO: Waiting up to 5m0s for pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-8qv5c" to be "success or failure"
Feb  2 11:29:38.521: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.537977ms
Feb  2 11:29:40.584: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113469977s
Feb  2 11:29:42.634: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163548877s
Feb  2 11:29:44.645: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175026291s
Feb  2 11:29:46.668: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.197979316s
Feb  2 11:29:48.688: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217581427s
STEP: Saw pod success
Feb  2 11:29:48.688: INFO: Pod "pod-4bedb39e-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:29:48.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4bedb39e-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:29:48.818: INFO: Waiting for pod pod-4bedb39e-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:29:48.832: INFO: Pod pod-4bedb39e-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:29:48.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8qv5c" for this suite.
Feb  2 11:29:54.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:29:55.048: INFO: namespace: e2e-tests-emptydir-8qv5c, resource: bindings, ignored listing per whitelist
Feb  2 11:29:55.138: INFO: namespace e2e-tests-emptydir-8qv5c deletion completed in 6.286952314s

• [SLOW TEST:17.043 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:29:55.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 11:29:55.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-j6pxh'
Feb  2 11:29:55.489: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 11:29:55.489: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  2 11:29:55.526: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-s9tm5]
Feb  2 11:29:55.526: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-s9tm5" in namespace "e2e-tests-kubectl-j6pxh" to be "running and ready"
Feb  2 11:29:55.692: INFO: Pod "e2e-test-nginx-rc-s9tm5": Phase="Pending", Reason="", readiness=false. Elapsed: 165.88161ms
Feb  2 11:29:57.705: INFO: Pod "e2e-test-nginx-rc-s9tm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178356516s
Feb  2 11:29:59.897: INFO: Pod "e2e-test-nginx-rc-s9tm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370185811s
Feb  2 11:30:01.904: INFO: Pod "e2e-test-nginx-rc-s9tm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377303659s
Feb  2 11:30:03.931: INFO: Pod "e2e-test-nginx-rc-s9tm5": Phase="Running", Reason="", readiness=true. Elapsed: 8.404572394s
Feb  2 11:30:03.931: INFO: Pod "e2e-test-nginx-rc-s9tm5" satisfied condition "running and ready"
Feb  2 11:30:03.931: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-s9tm5]
Feb  2 11:30:03.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j6pxh'
Feb  2 11:30:04.215: INFO: stderr: ""
Feb  2 11:30:04.215: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  2 11:30:04.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j6pxh'
Feb  2 11:30:04.352: INFO: stderr: ""
Feb  2 11:30:04.352: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:30:04.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j6pxh" for this suite.
Feb  2 11:30:26.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:30:27.044: INFO: namespace: e2e-tests-kubectl-j6pxh, resource: bindings, ignored listing per whitelist
Feb  2 11:30:27.098: INFO: namespace e2e-tests-kubectl-j6pxh deletion completed in 22.713943265s

• [SLOW TEST:31.960 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:30:27.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  2 11:30:27.381: INFO: Waiting up to 5m0s for pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-xjwv6" to be "success or failure"
Feb  2 11:30:27.388: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.066968ms
Feb  2 11:30:29.407: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026107545s
Feb  2 11:30:31.425: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04425848s
Feb  2 11:30:34.394: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.013510421s
Feb  2 11:30:36.428: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.047393566s
Feb  2 11:30:38.442: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.060684373s
STEP: Saw pod success
Feb  2 11:30:38.442: INFO: Pod "pod-691a9cc4-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:30:38.449: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-691a9cc4-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:30:38.804: INFO: Waiting for pod pod-691a9cc4-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:30:38.818: INFO: Pod pod-691a9cc4-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:30:38.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xjwv6" for this suite.
Feb  2 11:30:44.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:30:45.099: INFO: namespace: e2e-tests-emptydir-xjwv6, resource: bindings, ignored listing per whitelist
Feb  2 11:30:45.111: INFO: namespace e2e-tests-emptydir-xjwv6 deletion completed in 6.276039127s

• [SLOW TEST:18.012 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:30:45.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sc5f8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 11:30:45.320: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 11:31:21.611: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sc5f8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 11:31:21.612: INFO: >>> kubeConfig: /root/.kube/config
I0202 11:31:21.710695       9 log.go:172] (0xc001d9c210) (0xc000d430e0) Create stream
I0202 11:31:21.710986       9 log.go:172] (0xc001d9c210) (0xc000d430e0) Stream added, broadcasting: 1
I0202 11:31:21.716656       9 log.go:172] (0xc001d9c210) Reply frame received for 1
I0202 11:31:21.716697       9 log.go:172] (0xc001d9c210) (0xc0025ae0a0) Create stream
I0202 11:31:21.716716       9 log.go:172] (0xc001d9c210) (0xc0025ae0a0) Stream added, broadcasting: 3
I0202 11:31:21.717972       9 log.go:172] (0xc001d9c210) Reply frame received for 3
I0202 11:31:21.717999       9 log.go:172] (0xc001d9c210) (0xc000d43180) Create stream
I0202 11:31:21.718009       9 log.go:172] (0xc001d9c210) (0xc000d43180) Stream added, broadcasting: 5
I0202 11:31:21.719023       9 log.go:172] (0xc001d9c210) Reply frame received for 5
I0202 11:31:21.932555       9 log.go:172] (0xc001d9c210) Data frame received for 3
I0202 11:31:21.932609       9 log.go:172] (0xc0025ae0a0) (3) Data frame handling
I0202 11:31:21.932633       9 log.go:172] (0xc0025ae0a0) (3) Data frame sent
I0202 11:31:22.122030       9 log.go:172] (0xc001d9c210) Data frame received for 1
I0202 11:31:22.122075       9 log.go:172] (0xc000d430e0) (1) Data frame handling
I0202 11:31:22.122110       9 log.go:172] (0xc000d430e0) (1) Data frame sent
I0202 11:31:22.122805       9 log.go:172] (0xc001d9c210) (0xc000d430e0) Stream removed, broadcasting: 1
I0202 11:31:22.123626       9 log.go:172] (0xc001d9c210) (0xc000d43180) Stream removed, broadcasting: 5
I0202 11:31:22.123677       9 log.go:172] (0xc001d9c210) (0xc0025ae0a0) Stream removed, broadcasting: 3
I0202 11:31:22.123708       9 log.go:172] (0xc001d9c210) (0xc000d430e0) Stream removed, broadcasting: 1
I0202 11:31:22.123717       9 log.go:172] (0xc001d9c210) (0xc0025ae0a0) Stream removed, broadcasting: 3
I0202 11:31:22.123725       9 log.go:172] (0xc001d9c210) (0xc000d43180) Stream removed, broadcasting: 5
I0202 11:31:22.123835       9 log.go:172] (0xc001d9c210) Go away received
Feb  2 11:31:22.124: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:31:22.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sc5f8" for this suite.
Feb  2 11:31:46.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:31:46.309: INFO: namespace: e2e-tests-pod-network-test-sc5f8, resource: bindings, ignored listing per whitelist
Feb  2 11:31:46.392: INFO: namespace e2e-tests-pod-network-test-sc5f8 deletion completed in 24.233841368s

• [SLOW TEST:61.281 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
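Annotation: the `ExecWithOptions` entry at 11:31:21 above shows how this check works — the framework execs `curl` from a host-network test pod against the netexec container's `/dial` endpoint, which relays a `hostName` request to the target pod over UDP and reports which hosts answered. A minimal sketch of building and parsing that request (helper names are mine, not the framework's):

```python
import json
from urllib.parse import urlencode

def build_dial_url(proxy_ip, proxy_port, target_ip, target_port,
                   protocol="udp", tries=1):
    """Build the /dial URL curled from the host-test pod, as seen in the
    log: the netexec container at proxy_ip relays `tries` requests to the
    target pod over the given protocol."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"

def hosts_that_answered(body):
    """Parse the JSON /dial response, e.g. {"responses": ["netserver-0"]}.
    An empty list means no endpoint answered, matching the final
    'Waiting for endpoints: map[]' line once all expected hosts replied."""
    return json.loads(body).get("responses", [])

# Reconstructs the exact URL from the ExecWithOptions line above.
url = build_dial_url("10.32.0.5", 8080, "10.32.0.4", 8081)
```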
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:31:46.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  2 11:31:46.705: INFO: Number of nodes with available pods: 0
Feb  2 11:31:46.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:47.726: INFO: Number of nodes with available pods: 0
Feb  2 11:31:47.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:48.751: INFO: Number of nodes with available pods: 0
Feb  2 11:31:48.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:49.726: INFO: Number of nodes with available pods: 0
Feb  2 11:31:49.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:50.745: INFO: Number of nodes with available pods: 0
Feb  2 11:31:50.745: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:53.445: INFO: Number of nodes with available pods: 0
Feb  2 11:31:53.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:53.776: INFO: Number of nodes with available pods: 0
Feb  2 11:31:53.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:54.734: INFO: Number of nodes with available pods: 0
Feb  2 11:31:54.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:55.771: INFO: Number of nodes with available pods: 1
Feb  2 11:31:55.771: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  2 11:31:55.835: INFO: Number of nodes with available pods: 0
Feb  2 11:31:55.835: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:56.865: INFO: Number of nodes with available pods: 0
Feb  2 11:31:56.865: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:57.895: INFO: Number of nodes with available pods: 0
Feb  2 11:31:57.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:31:58.881: INFO: Number of nodes with available pods: 0
Feb  2 11:31:58.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:00.113: INFO: Number of nodes with available pods: 0
Feb  2 11:32:00.113: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:00.867: INFO: Number of nodes with available pods: 0
Feb  2 11:32:00.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:01.861: INFO: Number of nodes with available pods: 0
Feb  2 11:32:01.861: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:02.864: INFO: Number of nodes with available pods: 0
Feb  2 11:32:02.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:03.865: INFO: Number of nodes with available pods: 0
Feb  2 11:32:03.865: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:04.863: INFO: Number of nodes with available pods: 0
Feb  2 11:32:04.863: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:05.859: INFO: Number of nodes with available pods: 0
Feb  2 11:32:05.859: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:06.864: INFO: Number of nodes with available pods: 0
Feb  2 11:32:06.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:07.885: INFO: Number of nodes with available pods: 0
Feb  2 11:32:07.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:08.882: INFO: Number of nodes with available pods: 0
Feb  2 11:32:08.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:09.867: INFO: Number of nodes with available pods: 0
Feb  2 11:32:09.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:10.881: INFO: Number of nodes with available pods: 0
Feb  2 11:32:10.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:11.852: INFO: Number of nodes with available pods: 0
Feb  2 11:32:11.852: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:12.901: INFO: Number of nodes with available pods: 0
Feb  2 11:32:12.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:14.174: INFO: Number of nodes with available pods: 0
Feb  2 11:32:14.174: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:14.888: INFO: Number of nodes with available pods: 0
Feb  2 11:32:14.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:15.890: INFO: Number of nodes with available pods: 0
Feb  2 11:32:15.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:16.867: INFO: Number of nodes with available pods: 0
Feb  2 11:32:16.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:18.193: INFO: Number of nodes with available pods: 0
Feb  2 11:32:18.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:18.861: INFO: Number of nodes with available pods: 0
Feb  2 11:32:18.861: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:32:19.922: INFO: Number of nodes with available pods: 1
Feb  2 11:32:19.922: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ltcv6, will wait for the garbage collector to delete the pods
Feb  2 11:32:20.009: INFO: Deleting DaemonSet.extensions daemon-set took: 21.038656ms
Feb  2 11:32:20.109: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.571273ms
Feb  2 11:32:32.625: INFO: Number of nodes with available pods: 0
Feb  2 11:32:32.625: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 11:32:32.631: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ltcv6/daemonsets","resourceVersion":"20299601"},"items":null}

Feb  2 11:32:32.641: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ltcv6/pods","resourceVersion":"20299601"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:32:32.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ltcv6" for this suite.
Feb  2 11:32:38.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:32:38.996: INFO: namespace: e2e-tests-daemonsets-ltcv6, resource: bindings, ignored listing per whitelist
Feb  2 11:32:39.069: INFO: namespace e2e-tests-daemonsets-ltcv6 deletion completed in 6.377390471s

• [SLOW TEST:52.676 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
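Annotation: the long run of once-per-second "Number of nodes with available pods: 0" lines above is a poll loop — the test re-checks DaemonSet status until every node has an available pod or a timeout fires (the Go framework uses `wait.Poll` for this; the sketch below is a simplified stand-in, not the framework's code):

```python
import time

def wait_for(condition, timeout=60.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True,
    or give up once `timeout` elapses. Each unsuccessful iteration is
    where the repeated INFO lines in the log above come from."""
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

Note the poll runs twice in the spec: once after creating the DaemonSet and again after deleting its pod, which is why the same block of lines appears twice before "Number of running nodes: 1, number of available pods: 1".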
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:32:39.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  2 11:32:39.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  2 11:32:39.265: INFO: Waiting for terminating namespaces to be deleted...
Feb  2 11:32:39.269: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  2 11:32:39.288: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 11:32:39.288: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  2 11:32:39.288: INFO: 	Container weave ready: true, restart count 0
Feb  2 11:32:39.288: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 11:32:39.288: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 11:32:39.288: INFO: 	Container coredns ready: true, restart count 0
Feb  2 11:32:39.288: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 11:32:39.288: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 11:32:39.288: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 11:32:39.288: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 11:32:39.288: INFO: 	Container coredns ready: true, restart count 0
Feb  2 11:32:39.288: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  2 11:32:39.288: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bdef0a04-45af-11ea-8b99-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bdef0a04-45af-11ea-8b99-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bdef0a04-45af-11ea-8b99-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:32:59.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hhk5w" for this suite.
Feb  2 11:33:13.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:33:14.135: INFO: namespace: e2e-tests-sched-pred-hhk5w, resource: bindings, ignored listing per whitelist
Feb  2 11:33:14.191: INFO: namespace e2e-tests-sched-pred-hhk5w deletion completed in 14.244616345s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:35.122 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
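Annotation: the NodeSelector spec above stamps a random label (`kubernetes.io/e2e-… = 42`) on the node, then relaunches the pod with a matching `spec.nodeSelector`. The scheduling predicate it validates is simple exact-match containment — a sketch (my own helper, not the scheduler's implementation):

```python
def node_selector_matches(node_labels, node_selector):
    """A pod with spec.nodeSelector is schedulable onto a node only if
    every selector key/value pair appears verbatim in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Labels as they stand mid-test, after the STEP that applies the e2e label.
labels = {
    "kubernetes.io/hostname": "hunter-server-hu5at5svl7ps",
    "kubernetes.io/e2e-bdef0a04-45af-11ea-8b99-0242ac110005": "42",
}
```

The final STEPs remove the label again and verify it is gone, so the cluster is left unchanged for later [Serial] specs.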
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:33:14.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:33:14.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-4wmx5" to be "success or failure"
Feb  2 11:33:14.515: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.397853ms
Feb  2 11:33:16.552: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118118751s
Feb  2 11:33:18.591: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157784749s
Feb  2 11:33:20.627: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192994821s
Feb  2 11:33:22.644: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21000364s
Feb  2 11:33:24.720: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.285955081s
STEP: Saw pod success
Feb  2 11:33:24.720: INFO: Pod "downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:33:24.726: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:33:25.184: INFO: Waiting for pod downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:33:25.191: INFO: Pod downwardapi-volume-ccb4e375-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:33:25.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4wmx5" for this suite.
Feb  2 11:33:31.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:33:31.918: INFO: namespace: e2e-tests-downward-api-4wmx5, resource: bindings, ignored listing per whitelist
Feb  2 11:33:31.926: INFO: namespace e2e-tests-downward-api-4wmx5 deletion completed in 6.727146132s

• [SLOW TEST:17.734 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:33:31.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb  2 11:33:32.115: INFO: Waiting up to 5m0s for pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-containers-j6bwx" to be "success or failure"
Feb  2 11:33:32.125: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.358413ms
Feb  2 11:33:34.137: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021435947s
Feb  2 11:33:36.152: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036908853s
Feb  2 11:33:38.196: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080844211s
Feb  2 11:33:40.491: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.375660446s
Feb  2 11:33:42.514: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398712583s
STEP: Saw pod success
Feb  2 11:33:42.514: INFO: Pod "client-containers-d73ea42c-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:33:42.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d73ea42c-45af-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:33:43.493: INFO: Waiting for pod client-containers-d73ea42c-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:33:43.498: INFO: Pod client-containers-d73ea42c-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:33:43.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-j6bwx" for this suite.
Feb  2 11:33:49.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:33:49.681: INFO: namespace: e2e-tests-containers-j6bwx, resource: bindings, ignored listing per whitelist
Feb  2 11:33:49.683: INFO: namespace e2e-tests-containers-j6bwx deletion completed in 6.180261201s

• [SLOW TEST:17.758 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:33:49.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  2 11:33:50.018: INFO: Waiting up to 5m0s for pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-rvxmw" to be "success or failure"
Feb  2 11:33:50.030: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.841493ms
Feb  2 11:33:52.042: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023315822s
Feb  2 11:33:54.065: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046640029s
Feb  2 11:33:56.082: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06395675s
Feb  2 11:33:58.301: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.282307167s
Feb  2 11:34:00.325: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.306325453s
STEP: Saw pod success
Feb  2 11:34:00.325: INFO: Pod "downward-api-e1eac315-45af-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:34:00.331: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e1eac315-45af-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 11:34:00.946: INFO: Waiting for pod downward-api-e1eac315-45af-11ea-8b99-0242ac110005 to disappear
Feb  2 11:34:00.961: INFO: Pod downward-api-e1eac315-45af-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:34:00.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rvxmw" for this suite.
Feb  2 11:34:07.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:34:07.028: INFO: namespace: e2e-tests-downward-api-rvxmw, resource: bindings, ignored listing per whitelist
Feb  2 11:34:07.188: INFO: namespace e2e-tests-downward-api-rvxmw deletion completed in 6.218476345s

• [SLOW TEST:17.504 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
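Annotation: the "default limits.cpu/memory from node allocatable" spec relies on Downward API `resourceFieldRef` fallback — when the container declares no resource limits, the injected values default to the node's allocatable capacity. A hand-written outline of such a pod spec (not the manifest the test generates):

```yaml
# Sketch only: with no resources.limits set on the container,
# resourceFieldRef for limits.cpu / limits.memory falls back to
# the node's allocatable CPU and memory.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```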
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:34:07.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:34:07.371: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  2 11:34:07.387: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  2 11:34:13.072: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  2 11:34:17.096: INFO: Creating deployment "test-rolling-update-deployment"
Feb  2 11:34:17.120: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  2 11:34:17.153: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  2 11:34:19.172: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  2 11:34:19.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:34:21.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:34:23.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:34:25.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240057, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:34:27.191: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  2 11:34:27.209: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-7lwtz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7lwtz/deployments/test-rolling-update-deployment,UID:f210c1b8-45af-11ea-a994-fa163e34d433,ResourceVersion:20299915,Generation:1,CreationTimestamp:2020-02-02 11:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-02 11:34:17 +0000 UTC 2020-02-02 11:34:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-02 11:34:26 +0000 UTC 2020-02-02 11:34:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  2 11:34:27.222: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-7lwtz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7lwtz/replicasets/test-rolling-update-deployment-75db98fb4c,UID:f2267ed2-45af-11ea-a994-fa163e34d433,ResourceVersion:20299906,Generation:1,CreationTimestamp:2020-02-02 11:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f210c1b8-45af-11ea-a994-fa163e34d433 0xc0012ccf87 0xc0012ccf88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  2 11:34:27.222: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  2 11:34:27.223: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-7lwtz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7lwtz/replicasets/test-rolling-update-controller,UID:ec4457a1-45af-11ea-a994-fa163e34d433,ResourceVersion:20299914,Generation:2,CreationTimestamp:2020-02-02 11:34:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f210c1b8-45af-11ea-a994-fa163e34d433 0xc0012ccb2f 0xc0012ccb40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 11:34:27.232: INFO: Pod "test-rolling-update-deployment-75db98fb4c-5km72" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-5km72,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-7lwtz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7lwtz/pods/test-rolling-update-deployment-75db98fb4c-5km72,UID:f22871b3-45af-11ea-a994-fa163e34d433,ResourceVersion:20299905,Generation:0,CreationTimestamp:2020-02-02 11:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c f2267ed2-45af-11ea-a994-fa163e34d433 0xc000bd32f7 0xc000bd32f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k9kg4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k9kg4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-k9kg4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd33a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd33d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:34:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:34:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:34:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:34:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-02 11:34:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-02 11:34:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0a2d8277a53775fe62548dbbb5a6198a4c01c5622a63eaf46433554efe0e6841}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:34:27.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7lwtz" for this suite.
Feb  2 11:34:35.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:34:35.503: INFO: namespace: e2e-tests-deployment-7lwtz, resource: bindings, ignored listing per whitelist
Feb  2 11:34:36.158: INFO: namespace e2e-tests-deployment-7lwtz deletion completed in 8.915851392s

• [SLOW TEST:28.971 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
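The Deployment dump above shows the default RollingUpdate strategy with MaxUnavailable:25% and MaxSurge:25%, and the status snapshots during the rollout report Replicas:2 for a deployment whose spec asks for 1 (the adopted old pod plus one surged new pod). Kubernetes resolves these percentages against spec.replicas, rounding maxSurge up and maxUnavailable down. The following sketch reproduces that rounding rule for illustration; the function name is mine, not a real client-go API:

```python
import math

def resolve_rolling_update(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Resolve percentage-based RollingUpdate bounds the way the Deployment
    controller does: maxSurge rounds up, maxUnavailable rounds down, and if
    both resolve to zero, maxUnavailable is bumped to 1 so the rollout can
    still make progress (illustrative re-implementation, not controller code)."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    if surge == 0 and unavailable == 0:
        unavailable = 1  # a rollout must always be allowed to move forward
    return surge, unavailable

# With replicas=1 and the 25%/25% defaults seen in the log, at most one
# extra pod may be surged and zero pods may be unavailable -- which is why
# the status lines show Replicas:2 while the old pod is still the only
# ready one.
print(resolve_rolling_update(1, 25, 25))
```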
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:34:36.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:34:36.528: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  2 11:34:36.577: INFO: Number of nodes with available pods: 0
Feb  2 11:34:36.577: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  2 11:34:36.715: INFO: Number of nodes with available pods: 0
Feb  2 11:34:36.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:37.741: INFO: Number of nodes with available pods: 0
Feb  2 11:34:37.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:38.731: INFO: Number of nodes with available pods: 0
Feb  2 11:34:38.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:39.729: INFO: Number of nodes with available pods: 0
Feb  2 11:34:39.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:40.734: INFO: Number of nodes with available pods: 0
Feb  2 11:34:40.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:41.725: INFO: Number of nodes with available pods: 0
Feb  2 11:34:41.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:42.728: INFO: Number of nodes with available pods: 0
Feb  2 11:34:42.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:43.733: INFO: Number of nodes with available pods: 1
Feb  2 11:34:43.733: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  2 11:34:43.869: INFO: Number of nodes with available pods: 1
Feb  2 11:34:43.869: INFO: Number of running nodes: 0, number of available pods: 1
Feb  2 11:34:44.886: INFO: Number of nodes with available pods: 0
Feb  2 11:34:44.886: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  2 11:34:44.917: INFO: Number of nodes with available pods: 0
Feb  2 11:34:44.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:45.952: INFO: Number of nodes with available pods: 0
Feb  2 11:34:45.952: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:46.948: INFO: Number of nodes with available pods: 0
Feb  2 11:34:46.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:47.929: INFO: Number of nodes with available pods: 0
Feb  2 11:34:47.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:48.931: INFO: Number of nodes with available pods: 0
Feb  2 11:34:48.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:49.934: INFO: Number of nodes with available pods: 0
Feb  2 11:34:49.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:50.927: INFO: Number of nodes with available pods: 0
Feb  2 11:34:50.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:51.930: INFO: Number of nodes with available pods: 0
Feb  2 11:34:51.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:52.972: INFO: Number of nodes with available pods: 0
Feb  2 11:34:52.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:53.929: INFO: Number of nodes with available pods: 0
Feb  2 11:34:53.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:54.939: INFO: Number of nodes with available pods: 0
Feb  2 11:34:54.939: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:55.929: INFO: Number of nodes with available pods: 0
Feb  2 11:34:55.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:57.278: INFO: Number of nodes with available pods: 0
Feb  2 11:34:57.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:57.937: INFO: Number of nodes with available pods: 0
Feb  2 11:34:57.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:58.986: INFO: Number of nodes with available pods: 0
Feb  2 11:34:58.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:34:59.934: INFO: Number of nodes with available pods: 0
Feb  2 11:34:59.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:35:00.928: INFO: Number of nodes with available pods: 1
Feb  2 11:35:00.928: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cnk6w, will wait for the garbage collector to delete the pods
Feb  2 11:35:01.046: INFO: Deleting DaemonSet.extensions daemon-set took: 11.535747ms
Feb  2 11:35:01.146: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.482596ms
Feb  2 11:35:12.857: INFO: Number of nodes with available pods: 0
Feb  2 11:35:12.857: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 11:35:12.864: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cnk6w/daemonsets","resourceVersion":"20300044"},"items":null}

Feb  2 11:35:12.868: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cnk6w/pods","resourceVersion":"20300044"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:35:12.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cnk6w" for this suite.
Feb  2 11:35:19.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:35:19.174: INFO: namespace: e2e-tests-daemonsets-cnk6w, resource: bindings, ignored listing per whitelist
Feb  2 11:35:19.183: INFO: namespace e2e-tests-daemonsets-cnk6w deletion completed in 6.203291331s

• [SLOW TEST:43.025 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
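The DaemonSet test above relabels the node from blue to green and watches the daemon pod get launched, then unscheduled, then launched again once the DaemonSet's node selector is updated to green. The placement rule being exercised is a simple label-subset match: a pod is placed only on nodes whose labels contain every key/value pair of spec.template.spec.nodeSelector. A minimal sketch of that matching rule (illustrative, not the real scheduler code):

```python
def selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True when the node's labels are a superset of the selector.
    An empty selector matches every node, as in Kubernetes."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Mirroring the test: the node starts labeled blue, so a DaemonSet
# selecting color=blue schedules a pod there, while one selecting
# color=green does not.
node = {"kubernetes.io/hostname": "hunter-server-hu5at5svl7ps", "color": "blue"}
assert selector_matches(node, {"color": "blue"})       # daemon pod launched
assert not selector_matches(node, {"color": "green"})  # pod would be evicted
```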
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:35:19.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:35:19.336: INFO: Creating deployment "test-recreate-deployment"
Feb  2 11:35:19.345: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  2 11:35:19.362: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  2 11:35:21.616: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  2 11:35:21.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:35:23.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:35:25.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:35:27.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716240119, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 11:35:29.637: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  2 11:35:29.663: INFO: Updating deployment test-recreate-deployment
Feb  2 11:35:29.663: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  2 11:35:30.355: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-fw66g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fw66g/deployments/test-recreate-deployment,UID:172986b1-45b0-11ea-a994-fa163e34d433,ResourceVersion:20300126,Generation:2,CreationTimestamp:2020-02-02 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-02 11:35:30 +0000 UTC 2020-02-02 11:35:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-02 11:35:30 +0000 UTC 2020-02-02 11:35:19 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  2 11:35:30.379: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-fw66g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fw66g/replicasets/test-recreate-deployment-589c4bfd,UID:1d80d50c-45b0-11ea-a994-fa163e34d433,ResourceVersion:20300124,Generation:1,CreationTimestamp:2020-02-02 11:35:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 172986b1-45b0-11ea-a994-fa163e34d433 0xc0009977ef 0xc000997800}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 11:35:30.379: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  2 11:35:30.379: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-fw66g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fw66g/replicasets/test-recreate-deployment-5bf7f65dc,UID:172d0e60-45b0-11ea-a994-fa163e34d433,ResourceVersion:20300115,Generation:2,CreationTimestamp:2020-02-02 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 172986b1-45b0-11ea-a994-fa163e34d433 0xc0009978c0 0xc0009978c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 11:35:31.590: INFO: Pod "test-recreate-deployment-589c4bfd-x8thg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-x8thg,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-fw66g,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fw66g/pods/test-recreate-deployment-589c4bfd-x8thg,UID:1d8200c6-45b0-11ea-a994-fa163e34d433,ResourceVersion:20300127,Generation:0,CreationTimestamp:2020-02-02 11:35:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 1d80d50c-45b0-11ea-a994-fa163e34d433 0xc00248617f 0xc002486190}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wmx78 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wmx78,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-wmx78 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024861f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002486210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:35:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:35:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:35:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 11:35:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-02 11:35:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:35:31.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-fw66g" for this suite.
Feb  2 11:35:42.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:35:42.349: INFO: namespace: e2e-tests-deployment-fw66g, resource: bindings, ignored listing per whitelist
Feb  2 11:35:42.382: INFO: namespace e2e-tests-deployment-fw66g deletion completed in 10.55626799s

• [SLOW TEST:23.199 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:35:42.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:35:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-44fkd" for this suite.
Feb  2 11:35:57.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:35:57.332: INFO: namespace: e2e-tests-kubelet-test-44fkd, resource: bindings, ignored listing per whitelist
Feb  2 11:35:57.343: INFO: namespace e2e-tests-kubelet-test-44fkd deletion completed in 6.395647218s

• [SLOW TEST:14.961 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:35:57.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  2 11:35:58.273: INFO: created pod pod-service-account-defaultsa
Feb  2 11:35:58.273: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  2 11:35:58.468: INFO: created pod pod-service-account-mountsa
Feb  2 11:35:58.469: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  2 11:35:58.539: INFO: created pod pod-service-account-nomountsa
Feb  2 11:35:58.540: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  2 11:35:58.577: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  2 11:35:58.577: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  2 11:35:58.745: INFO: created pod pod-service-account-mountsa-mountspec
Feb  2 11:35:58.745: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  2 11:35:58.826: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  2 11:35:58.826: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  2 11:35:59.073: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  2 11:35:59.073: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  2 11:35:59.118: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  2 11:35:59.118: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  2 11:35:59.411: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  2 11:35:59.411: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:35:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-mgmwh" for this suite.
Feb  2 11:36:27.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:36:27.602: INFO: namespace: e2e-tests-svcaccounts-mgmwh, resource: bindings, ignored listing per whitelist
Feb  2 11:36:27.721: INFO: namespace e2e-tests-svcaccounts-mgmwh deletion completed in 27.067336306s

• [SLOW TEST:30.378 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:36:27.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dgbjm
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  2 11:36:28.554: INFO: Found 0 stateful pods, waiting for 3
Feb  2 11:36:38.598: INFO: Found 2 stateful pods, waiting for 3
Feb  2 11:36:48.866: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:36:48.866: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:36:48.866: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 11:36:58.584: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:36:58.584: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:36:58.584: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:36:58.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dgbjm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:36:59.329: INFO: stderr: "I0202 11:36:58.892387     418 log.go:172] (0xc00016c840) (0xc00066b2c0) Create stream\nI0202 11:36:58.892650     418 log.go:172] (0xc00016c840) (0xc00066b2c0) Stream added, broadcasting: 1\nI0202 11:36:58.902332     418 log.go:172] (0xc00016c840) Reply frame received for 1\nI0202 11:36:58.902532     418 log.go:172] (0xc00016c840) (0xc00066b360) Create stream\nI0202 11:36:58.902582     418 log.go:172] (0xc00016c840) (0xc00066b360) Stream added, broadcasting: 3\nI0202 11:36:58.911227     418 log.go:172] (0xc00016c840) Reply frame received for 3\nI0202 11:36:58.911312     418 log.go:172] (0xc00016c840) (0xc00072c000) Create stream\nI0202 11:36:58.911327     418 log.go:172] (0xc00016c840) (0xc00072c000) Stream added, broadcasting: 5\nI0202 11:36:58.913419     418 log.go:172] (0xc00016c840) Reply frame received for 5\nI0202 11:36:59.196014     418 log.go:172] (0xc00016c840) Data frame received for 3\nI0202 11:36:59.196085     418 log.go:172] (0xc00066b360) (3) Data frame handling\nI0202 11:36:59.196110     418 log.go:172] (0xc00066b360) (3) Data frame sent\nI0202 11:36:59.318238     418 log.go:172] (0xc00016c840) Data frame received for 1\nI0202 11:36:59.318309     418 log.go:172] (0xc00016c840) (0xc00072c000) Stream removed, broadcasting: 5\nI0202 11:36:59.318359     418 log.go:172] (0xc00066b2c0) (1) Data frame handling\nI0202 11:36:59.318371     418 log.go:172] (0xc00066b2c0) (1) Data frame sent\nI0202 11:36:59.318442     418 log.go:172] (0xc00016c840) (0xc00066b360) Stream removed, broadcasting: 3\nI0202 11:36:59.318463     418 log.go:172] (0xc00016c840) (0xc00066b2c0) Stream removed, broadcasting: 1\nI0202 11:36:59.318477     418 log.go:172] (0xc00016c840) Go away received\nI0202 11:36:59.319295     418 log.go:172] (0xc00016c840) (0xc00066b2c0) Stream removed, broadcasting: 1\nI0202 11:36:59.319309     418 log.go:172] (0xc00016c840) (0xc00066b360) Stream removed, broadcasting: 3\nI0202 11:36:59.319318     418 log.go:172] (0xc00016c840) (0xc00072c000) Stream removed, broadcasting: 5\n"
Feb  2 11:36:59.329: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:36:59.329: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  2 11:37:09.410: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  2 11:37:19.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dgbjm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:37:20.157: INFO: stderr: "I0202 11:37:19.795395     440 log.go:172] (0xc0001a24d0) (0xc00059f360) Create stream\nI0202 11:37:19.795864     440 log.go:172] (0xc0001a24d0) (0xc00059f360) Stream added, broadcasting: 1\nI0202 11:37:19.803388     440 log.go:172] (0xc0001a24d0) Reply frame received for 1\nI0202 11:37:19.803447     440 log.go:172] (0xc0001a24d0) (0xc00059f400) Create stream\nI0202 11:37:19.803461     440 log.go:172] (0xc0001a24d0) (0xc00059f400) Stream added, broadcasting: 3\nI0202 11:37:19.804501     440 log.go:172] (0xc0001a24d0) Reply frame received for 3\nI0202 11:37:19.804544     440 log.go:172] (0xc0001a24d0) (0xc000752000) Create stream\nI0202 11:37:19.804556     440 log.go:172] (0xc0001a24d0) (0xc000752000) Stream added, broadcasting: 5\nI0202 11:37:19.805487     440 log.go:172] (0xc0001a24d0) Reply frame received for 5\nI0202 11:37:19.970360     440 log.go:172] (0xc0001a24d0) Data frame received for 3\nI0202 11:37:19.970884     440 log.go:172] (0xc00059f400) (3) Data frame handling\nI0202 11:37:19.971020     440 log.go:172] (0xc00059f400) (3) Data frame sent\nI0202 11:37:20.144591     440 log.go:172] (0xc0001a24d0) Data frame received for 1\nI0202 11:37:20.144703     440 log.go:172] (0xc00059f360) (1) Data frame handling\nI0202 11:37:20.144745     440 log.go:172] (0xc00059f360) (1) Data frame sent\nI0202 11:37:20.146038     440 log.go:172] (0xc0001a24d0) (0xc00059f400) Stream removed, broadcasting: 3\nI0202 11:37:20.146195     440 log.go:172] (0xc0001a24d0) (0xc00059f360) Stream removed, broadcasting: 1\nI0202 11:37:20.146306     440 log.go:172] (0xc0001a24d0) (0xc000752000) Stream removed, broadcasting: 5\nI0202 11:37:20.146358     440 log.go:172] (0xc0001a24d0) Go away received\nI0202 11:37:20.147452     440 log.go:172] (0xc0001a24d0) (0xc00059f360) Stream removed, broadcasting: 1\nI0202 11:37:20.147491     440 log.go:172] (0xc0001a24d0) (0xc00059f400) Stream removed, broadcasting: 3\nI0202 11:37:20.147504     440 log.go:172] (0xc0001a24d0) (0xc000752000) Stream removed, broadcasting: 5\n"
Feb  2 11:37:20.158: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:37:20.158: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:37:30.271: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:37:30.271: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:30.271: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:30.271: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:40.298: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:37:40.298: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:40.298: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:50.293: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:37:50.293: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:37:50.293: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:38:00.303: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:38:00.304: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 11:38:10.290: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  2 11:38:20.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dgbjm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:38:21.093: INFO: stderr: "I0202 11:38:20.721778     463 log.go:172] (0xc000736160) (0xc0006a85a0) Create stream\nI0202 11:38:20.722649     463 log.go:172] (0xc000736160) (0xc0006a85a0) Stream added, broadcasting: 1\nI0202 11:38:20.731917     463 log.go:172] (0xc000736160) Reply frame received for 1\nI0202 11:38:20.732167     463 log.go:172] (0xc000736160) (0xc00087a0a0) Create stream\nI0202 11:38:20.732208     463 log.go:172] (0xc000736160) (0xc00087a0a0) Stream added, broadcasting: 3\nI0202 11:38:20.734007     463 log.go:172] (0xc000736160) Reply frame received for 3\nI0202 11:38:20.734043     463 log.go:172] (0xc000736160) (0xc00087a140) Create stream\nI0202 11:38:20.734051     463 log.go:172] (0xc000736160) (0xc00087a140) Stream added, broadcasting: 5\nI0202 11:38:20.735526     463 log.go:172] (0xc000736160) Reply frame received for 5\nI0202 11:38:20.934647     463 log.go:172] (0xc000736160) Data frame received for 3\nI0202 11:38:20.934817     463 log.go:172] (0xc00087a0a0) (3) Data frame handling\nI0202 11:38:20.934895     463 log.go:172] (0xc00087a0a0) (3) Data frame sent\nI0202 11:38:21.077697     463 log.go:172] (0xc000736160) Data frame received for 1\nI0202 11:38:21.077852     463 log.go:172] (0xc0006a85a0) (1) Data frame handling\nI0202 11:38:21.077915     463 log.go:172] (0xc0006a85a0) (1) Data frame sent\nI0202 11:38:21.077958     463 log.go:172] (0xc000736160) (0xc0006a85a0) Stream removed, broadcasting: 1\nI0202 11:38:21.081148     463 log.go:172] (0xc000736160) (0xc00087a140) Stream removed, broadcasting: 5\nI0202 11:38:21.081217     463 log.go:172] (0xc000736160) (0xc00087a0a0) Stream removed, broadcasting: 3\nI0202 11:38:21.081361     463 log.go:172] (0xc000736160) (0xc0006a85a0) Stream removed, broadcasting: 1\nI0202 11:38:21.081370     463 log.go:172] (0xc000736160) (0xc00087a0a0) Stream removed, broadcasting: 3\nI0202 11:38:21.081378     463 log.go:172] (0xc000736160) (0xc00087a140) Stream removed, broadcasting: 5\n"
Feb  2 11:38:21.093: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:38:21.093: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 11:38:31.181: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  2 11:38:41.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dgbjm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:38:41.838: INFO: stderr: "I0202 11:38:41.483215     484 log.go:172] (0xc0001386e0) (0xc00072a640) Create stream\nI0202 11:38:41.483549     484 log.go:172] (0xc0001386e0) (0xc00072a640) Stream added, broadcasting: 1\nI0202 11:38:41.488913     484 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0202 11:38:41.488986     484 log.go:172] (0xc0001386e0) (0xc0005eac80) Create stream\nI0202 11:38:41.489000     484 log.go:172] (0xc0001386e0) (0xc0005eac80) Stream added, broadcasting: 3\nI0202 11:38:41.490294     484 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0202 11:38:41.490316     484 log.go:172] (0xc0001386e0) (0xc0005eadc0) Create stream\nI0202 11:38:41.490327     484 log.go:172] (0xc0001386e0) (0xc0005eadc0) Stream added, broadcasting: 5\nI0202 11:38:41.491283     484 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0202 11:38:41.645396     484 log.go:172] (0xc0001386e0) Data frame received for 3\nI0202 11:38:41.645599     484 log.go:172] (0xc0005eac80) (3) Data frame handling\nI0202 11:38:41.645648     484 log.go:172] (0xc0005eac80) (3) Data frame sent\nI0202 11:38:41.817288     484 log.go:172] (0xc0001386e0) Data frame received for 1\nI0202 11:38:41.817545     484 log.go:172] (0xc00072a640) (1) Data frame handling\nI0202 11:38:41.817598     484 log.go:172] (0xc00072a640) (1) Data frame sent\nI0202 11:38:41.819701     484 log.go:172] (0xc0001386e0) (0xc00072a640) Stream removed, broadcasting: 1\nI0202 11:38:41.820952     484 log.go:172] (0xc0001386e0) (0xc0005eac80) Stream removed, broadcasting: 3\nI0202 11:38:41.821062     484 log.go:172] (0xc0001386e0) (0xc0005eadc0) Stream removed, broadcasting: 5\nI0202 11:38:41.821161     484 log.go:172] (0xc0001386e0) (0xc00072a640) Stream removed, broadcasting: 1\nI0202 11:38:41.821178     484 log.go:172] (0xc0001386e0) (0xc0005eac80) Stream removed, broadcasting: 3\nI0202 11:38:41.821200     484 log.go:172] (0xc0001386e0) (0xc0005eadc0) Stream removed, broadcasting: 5\n"
Feb  2 11:38:41.838: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:38:41.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:38:51.934: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:38:51.934: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 11:38:51.934: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 11:39:01.973: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:39:01.974: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 11:39:01.974: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 11:39:11.966: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
Feb  2 11:39:11.966: INFO: Waiting for Pod e2e-tests-statefulset-dgbjm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 11:39:21.962: INFO: Waiting for StatefulSet e2e-tests-statefulset-dgbjm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  2 11:39:31.959: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dgbjm
Feb  2 11:39:31.969: INFO: Scaling statefulset ss2 to 0
Feb  2 11:40:12.120: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 11:40:12.131: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:40:12.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dgbjm" for this suite.
Feb  2 11:40:20.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:40:20.322: INFO: namespace: e2e-tests-statefulset-dgbjm, resource: bindings, ignored listing per whitelist
Feb  2 11:40:20.395: INFO: namespace e2e-tests-statefulset-dgbjm deletion completed in 8.202584992s

• [SLOW TEST:232.673 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:40:20.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:40:20.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-qm7b7" to be "success or failure"
Feb  2 11:40:20.804: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.617246ms
Feb  2 11:40:22.841: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046656338s
Feb  2 11:40:24.928: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133853798s
Feb  2 11:40:27.150: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356251454s
Feb  2 11:40:29.181: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387261415s
Feb  2 11:40:31.197: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.402739747s
STEP: Saw pod success
Feb  2 11:40:31.197: INFO: Pod "downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:40:31.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:40:32.510: INFO: Waiting for pod downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005 to disappear
Feb  2 11:40:32.525: INFO: Pod downwardapi-volume-cad54220-45b0-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:40:32.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qm7b7" for this suite.
Feb  2 11:40:38.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:40:38.747: INFO: namespace: e2e-tests-projected-qm7b7, resource: bindings, ignored listing per whitelist
Feb  2 11:40:38.955: INFO: namespace e2e-tests-projected-qm7b7 deletion completed in 6.392171062s

• [SLOW TEST:18.560 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:40:38.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mpmck
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 11:40:39.144: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 11:41:15.501: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mpmck PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 11:41:15.501: INFO: >>> kubeConfig: /root/.kube/config
I0202 11:41:15.564897       9 log.go:172] (0xc001d9c370) (0xc002596fa0) Create stream
I0202 11:41:15.564955       9 log.go:172] (0xc001d9c370) (0xc002596fa0) Stream added, broadcasting: 1
I0202 11:41:15.573321       9 log.go:172] (0xc001d9c370) Reply frame received for 1
I0202 11:41:15.573354       9 log.go:172] (0xc001d9c370) (0xc0012eef00) Create stream
I0202 11:41:15.573367       9 log.go:172] (0xc001d9c370) (0xc0012eef00) Stream added, broadcasting: 3
I0202 11:41:15.574531       9 log.go:172] (0xc001d9c370) Reply frame received for 3
I0202 11:41:15.574593       9 log.go:172] (0xc001d9c370) (0xc001269040) Create stream
I0202 11:41:15.574608       9 log.go:172] (0xc001d9c370) (0xc001269040) Stream added, broadcasting: 5
I0202 11:41:15.575797       9 log.go:172] (0xc001d9c370) Reply frame received for 5
I0202 11:41:15.770462       9 log.go:172] (0xc001d9c370) Data frame received for 3
I0202 11:41:15.770505       9 log.go:172] (0xc0012eef00) (3) Data frame handling
I0202 11:41:15.770539       9 log.go:172] (0xc0012eef00) (3) Data frame sent
I0202 11:41:15.920069       9 log.go:172] (0xc001d9c370) (0xc0012eef00) Stream removed, broadcasting: 3
I0202 11:41:15.920353       9 log.go:172] (0xc001d9c370) Data frame received for 1
I0202 11:41:15.920403       9 log.go:172] (0xc002596fa0) (1) Data frame handling
I0202 11:41:15.920453       9 log.go:172] (0xc002596fa0) (1) Data frame sent
I0202 11:41:15.920474       9 log.go:172] (0xc001d9c370) (0xc002596fa0) Stream removed, broadcasting: 1
I0202 11:41:15.920584       9 log.go:172] (0xc001d9c370) (0xc001269040) Stream removed, broadcasting: 5
I0202 11:41:15.920649       9 log.go:172] (0xc001d9c370) Go away received
I0202 11:41:15.920833       9 log.go:172] (0xc001d9c370) (0xc002596fa0) Stream removed, broadcasting: 1
I0202 11:41:15.920874       9 log.go:172] (0xc001d9c370) (0xc0012eef00) Stream removed, broadcasting: 3
I0202 11:41:15.920898       9 log.go:172] (0xc001d9c370) (0xc001269040) Stream removed, broadcasting: 5
Feb  2 11:41:15.920: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:41:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-mpmck" for this suite.
Feb  2 11:41:40.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:41:40.083: INFO: namespace: e2e-tests-pod-network-test-mpmck, resource: bindings, ignored listing per whitelist
Feb  2 11:41:40.174: INFO: namespace e2e-tests-pod-network-test-mpmck deletion completed in 24.226962505s

• [SLOW TEST:61.218 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:41:40.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-fa3d0e5c-45b0-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:41:50.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9n7ht" for this suite.
Feb  2 11:42:14.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:42:14.804: INFO: namespace: e2e-tests-configmap-9n7ht, resource: bindings, ignored listing per whitelist
Feb  2 11:42:14.827: INFO: namespace e2e-tests-configmap-9n7ht deletion completed in 24.237739595s

• [SLOW TEST:34.652 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:42:14.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zz6pc
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-zz6pc
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-zz6pc
Feb  2 11:42:15.139: INFO: Found 0 stateful pods, waiting for 1
Feb  2 11:42:25.157: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  2 11:42:25.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:42:25.957: INFO: stderr: "I0202 11:42:25.500411     506 log.go:172] (0xc000712370) (0xc000732640) Create stream\nI0202 11:42:25.500809     506 log.go:172] (0xc000712370) (0xc000732640) Stream added, broadcasting: 1\nI0202 11:42:25.512426     506 log.go:172] (0xc000712370) Reply frame received for 1\nI0202 11:42:25.512625     506 log.go:172] (0xc000712370) (0xc00079cbe0) Create stream\nI0202 11:42:25.512683     506 log.go:172] (0xc000712370) (0xc00079cbe0) Stream added, broadcasting: 3\nI0202 11:42:25.514754     506 log.go:172] (0xc000712370) Reply frame received for 3\nI0202 11:42:25.514809     506 log.go:172] (0xc000712370) (0xc0007326e0) Create stream\nI0202 11:42:25.514824     506 log.go:172] (0xc000712370) (0xc0007326e0) Stream added, broadcasting: 5\nI0202 11:42:25.516380     506 log.go:172] (0xc000712370) Reply frame received for 5\nI0202 11:42:25.827689     506 log.go:172] (0xc000712370) Data frame received for 3\nI0202 11:42:25.827835     506 log.go:172] (0xc00079cbe0) (3) Data frame handling\nI0202 11:42:25.827864     506 log.go:172] (0xc00079cbe0) (3) Data frame sent\nI0202 11:42:25.948665     506 log.go:172] (0xc000712370) Data frame received for 1\nI0202 11:42:25.948980     506 log.go:172] (0xc000712370) (0xc00079cbe0) Stream removed, broadcasting: 3\nI0202 11:42:25.949044     506 log.go:172] (0xc000732640) (1) Data frame handling\nI0202 11:42:25.949065     506 log.go:172] (0xc000732640) (1) Data frame sent\nI0202 11:42:25.949131     506 log.go:172] (0xc000712370) (0xc0007326e0) Stream removed, broadcasting: 5\nI0202 11:42:25.949518     506 log.go:172] (0xc000712370) (0xc000732640) Stream removed, broadcasting: 1\nI0202 11:42:25.949834     506 log.go:172] (0xc000712370) Go away received\nI0202 11:42:25.950413     506 log.go:172] (0xc000712370) (0xc000732640) Stream removed, broadcasting: 1\nI0202 11:42:25.950503     506 log.go:172] (0xc000712370) (0xc00079cbe0) Stream removed, broadcasting: 3\nI0202 11:42:25.950629     506 log.go:172] (0xc000712370) (0xc0007326e0) Stream removed, broadcasting: 5\n"
Feb  2 11:42:25.957: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:42:25.957: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 11:42:26.014: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  2 11:42:36.038: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 11:42:36.038: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 11:42:36.119: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999529s
Feb  2 11:42:37.128: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.941677102s
Feb  2 11:42:38.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.932616092s
Feb  2 11:42:39.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.909613263s
Feb  2 11:42:40.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.796187158s
Feb  2 11:42:41.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.7769971s
Feb  2 11:42:42.326: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.760080376s
Feb  2 11:42:43.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.734829088s
Feb  2 11:42:44.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.718960707s
Feb  2 11:42:45.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 697.1502ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-zz6pc
Feb  2 11:42:46.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:42:47.081: INFO: stderr: "I0202 11:42:46.799834     529 log.go:172] (0xc000736370) (0xc0007c4640) Create stream\nI0202 11:42:46.800124     529 log.go:172] (0xc000736370) (0xc0007c4640) Stream added, broadcasting: 1\nI0202 11:42:46.806537     529 log.go:172] (0xc000736370) Reply frame received for 1\nI0202 11:42:46.806603     529 log.go:172] (0xc000736370) (0xc0005d4d20) Create stream\nI0202 11:42:46.806613     529 log.go:172] (0xc000736370) (0xc0005d4d20) Stream added, broadcasting: 3\nI0202 11:42:46.807840     529 log.go:172] (0xc000736370) Reply frame received for 3\nI0202 11:42:46.807866     529 log.go:172] (0xc000736370) (0xc0007c46e0) Create stream\nI0202 11:42:46.807875     529 log.go:172] (0xc000736370) (0xc0007c46e0) Stream added, broadcasting: 5\nI0202 11:42:46.809462     529 log.go:172] (0xc000736370) Reply frame received for 5\nI0202 11:42:46.923845     529 log.go:172] (0xc000736370) Data frame received for 3\nI0202 11:42:46.924004     529 log.go:172] (0xc0005d4d20) (3) Data frame handling\nI0202 11:42:46.924045     529 log.go:172] (0xc0005d4d20) (3) Data frame sent\nI0202 11:42:47.067873     529 log.go:172] (0xc000736370) Data frame received for 1\nI0202 11:42:47.068427     529 log.go:172] (0xc000736370) (0xc0007c46e0) Stream removed, broadcasting: 5\nI0202 11:42:47.068592     529 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0202 11:42:47.068656     529 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0202 11:42:47.068729     529 log.go:172] (0xc000736370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0202 11:42:47.068870     529 log.go:172] (0xc000736370) (0xc0005d4d20) Stream removed, broadcasting: 3\nI0202 11:42:47.069172     529 log.go:172] (0xc000736370) Go away received\nI0202 11:42:47.069320     529 log.go:172] (0xc000736370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0202 11:42:47.069376     529 log.go:172] (0xc000736370) (0xc0005d4d20) Stream removed, broadcasting: 3\nI0202 11:42:47.069394     529 log.go:172] (0xc000736370) (0xc0007c46e0) Stream removed, broadcasting: 5\n"
Feb  2 11:42:47.081: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:42:47.081: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:42:47.106: INFO: Found 1 stateful pods, waiting for 3
Feb  2 11:42:57.413: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:42:57.413: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:42:57.413: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 11:43:07.129: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:43:07.129: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 11:43:07.129: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  2 11:43:07.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:43:07.800: INFO: stderr: "I0202 11:43:07.439931     552 log.go:172] (0xc000138580) (0xc00053d360) Create stream\nI0202 11:43:07.440476     552 log.go:172] (0xc000138580) (0xc00053d360) Stream added, broadcasting: 1\nI0202 11:43:07.450372     552 log.go:172] (0xc000138580) Reply frame received for 1\nI0202 11:43:07.450445     552 log.go:172] (0xc000138580) (0xc0005ba000) Create stream\nI0202 11:43:07.450460     552 log.go:172] (0xc000138580) (0xc0005ba000) Stream added, broadcasting: 3\nI0202 11:43:07.452450     552 log.go:172] (0xc000138580) Reply frame received for 3\nI0202 11:43:07.452585     552 log.go:172] (0xc000138580) (0xc00053d400) Create stream\nI0202 11:43:07.452602     552 log.go:172] (0xc000138580) (0xc00053d400) Stream added, broadcasting: 5\nI0202 11:43:07.454227     552 log.go:172] (0xc000138580) Reply frame received for 5\nI0202 11:43:07.638481     552 log.go:172] (0xc000138580) Data frame received for 3\nI0202 11:43:07.638640     552 log.go:172] (0xc0005ba000) (3) Data frame handling\nI0202 11:43:07.638674     552 log.go:172] (0xc0005ba000) (3) Data frame sent\nI0202 11:43:07.786110     552 log.go:172] (0xc000138580) Data frame received for 1\nI0202 11:43:07.786278     552 log.go:172] (0xc000138580) (0xc00053d400) Stream removed, broadcasting: 5\nI0202 11:43:07.786393     552 log.go:172] (0xc00053d360) (1) Data frame handling\nI0202 11:43:07.786441     552 log.go:172] (0xc00053d360) (1) Data frame sent\nI0202 11:43:07.786494     552 log.go:172] (0xc000138580) (0xc0005ba000) Stream removed, broadcasting: 3\nI0202 11:43:07.786580     552 log.go:172] (0xc000138580) (0xc00053d360) Stream removed, broadcasting: 1\nI0202 11:43:07.786634     552 log.go:172] (0xc000138580) Go away received\nI0202 11:43:07.787430     552 log.go:172] (0xc000138580) (0xc00053d360) Stream removed, broadcasting: 1\nI0202 11:43:07.787462     552 log.go:172] (0xc000138580) (0xc0005ba000) Stream removed, broadcasting: 3\nI0202 11:43:07.787481     552 log.go:172] (0xc000138580) (0xc00053d400) Stream removed, broadcasting: 5\n"
Feb  2 11:43:07.800: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:43:07.800: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 11:43:07.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:43:08.685: INFO: stderr: "I0202 11:43:08.156756     574 log.go:172] (0xc00060e2c0) (0xc00055a780) Create stream\nI0202 11:43:08.157032     574 log.go:172] (0xc00060e2c0) (0xc00055a780) Stream added, broadcasting: 1\nI0202 11:43:08.163947     574 log.go:172] (0xc00060e2c0) Reply frame received for 1\nI0202 11:43:08.164012     574 log.go:172] (0xc00060e2c0) (0xc00055a820) Create stream\nI0202 11:43:08.164019     574 log.go:172] (0xc00060e2c0) (0xc00055a820) Stream added, broadcasting: 3\nI0202 11:43:08.164897     574 log.go:172] (0xc00060e2c0) Reply frame received for 3\nI0202 11:43:08.164957     574 log.go:172] (0xc00060e2c0) (0xc0005e4e60) Create stream\nI0202 11:43:08.164967     574 log.go:172] (0xc00060e2c0) (0xc0005e4e60) Stream added, broadcasting: 5\nI0202 11:43:08.165833     574 log.go:172] (0xc00060e2c0) Reply frame received for 5\nI0202 11:43:08.340491     574 log.go:172] (0xc00060e2c0) Data frame received for 3\nI0202 11:43:08.340569     574 log.go:172] (0xc00055a820) (3) Data frame handling\nI0202 11:43:08.340600     574 log.go:172] (0xc00055a820) (3) Data frame sent\nI0202 11:43:08.655732     574 log.go:172] (0xc00060e2c0) Data frame received for 1\nI0202 11:43:08.655930     574 log.go:172] (0xc00060e2c0) (0xc00055a820) Stream removed, broadcasting: 3\nI0202 11:43:08.656100     574 log.go:172] (0xc00055a780) (1) Data frame handling\nI0202 11:43:08.656166     574 log.go:172] (0xc00055a780) (1) Data frame sent\nI0202 11:43:08.656264     574 log.go:172] (0xc00060e2c0) (0xc0005e4e60) Stream removed, broadcasting: 5\nI0202 11:43:08.656376     574 log.go:172] (0xc00060e2c0) (0xc00055a780) Stream removed, broadcasting: 1\nI0202 11:43:08.657290     574 log.go:172] (0xc00060e2c0) (0xc00055a780) Stream removed, broadcasting: 1\nI0202 11:43:08.657322     574 log.go:172] (0xc00060e2c0) (0xc00055a820) Stream removed, broadcasting: 3\nI0202 11:43:08.657356     574 log.go:172] (0xc00060e2c0) (0xc0005e4e60) Stream removed, broadcasting: 5\n"
Feb  2 11:43:08.685: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:43:08.685: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 11:43:08.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 11:43:09.324: INFO: stderr: "I0202 11:43:08.949058     595 log.go:172] (0xc0006ec370) (0xc00079a640) Create stream\nI0202 11:43:08.949665     595 log.go:172] (0xc0006ec370) (0xc00079a640) Stream added, broadcasting: 1\nI0202 11:43:08.985035     595 log.go:172] (0xc0006ec370) Reply frame received for 1\nI0202 11:43:08.985200     595 log.go:172] (0xc0006ec370) (0xc00063ad20) Create stream\nI0202 11:43:08.985217     595 log.go:172] (0xc0006ec370) (0xc00063ad20) Stream added, broadcasting: 3\nI0202 11:43:08.987514     595 log.go:172] (0xc0006ec370) Reply frame received for 3\nI0202 11:43:08.987594     595 log.go:172] (0xc0006ec370) (0xc00079a6e0) Create stream\nI0202 11:43:08.987643     595 log.go:172] (0xc0006ec370) (0xc00079a6e0) Stream added, broadcasting: 5\nI0202 11:43:08.988827     595 log.go:172] (0xc0006ec370) Reply frame received for 5\nI0202 11:43:09.216511     595 log.go:172] (0xc0006ec370) Data frame received for 3\nI0202 11:43:09.216569     595 log.go:172] (0xc00063ad20) (3) Data frame handling\nI0202 11:43:09.216584     595 log.go:172] (0xc00063ad20) (3) Data frame sent\nI0202 11:43:09.316697     595 log.go:172] (0xc0006ec370) Data frame received for 1\nI0202 11:43:09.316770     595 log.go:172] (0xc00079a640) (1) Data frame handling\nI0202 11:43:09.316806     595 log.go:172] (0xc00079a640) (1) Data frame sent\nI0202 11:43:09.317061     595 log.go:172] (0xc0006ec370) (0xc00079a640) Stream removed, broadcasting: 1\nI0202 11:43:09.317224     595 log.go:172] (0xc0006ec370) (0xc00063ad20) Stream removed, broadcasting: 3\nI0202 11:43:09.318035     595 log.go:172] (0xc0006ec370) (0xc00079a6e0) Stream removed, broadcasting: 5\nI0202 11:43:09.318174     595 log.go:172] (0xc0006ec370) (0xc00079a640) Stream removed, broadcasting: 1\nI0202 11:43:09.318245     595 log.go:172] (0xc0006ec370) (0xc00063ad20) Stream removed, broadcasting: 3\nI0202 11:43:09.318256     595 log.go:172] (0xc0006ec370) (0xc00079a6e0) Stream removed, broadcasting: 5\nI0202 11:43:09.318481     595 log.go:172] (0xc0006ec370) Go away received\n"
Feb  2 11:43:09.324: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 11:43:09.324: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 11:43:09.324: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 11:43:09.332: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  2 11:43:19.375: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 11:43:19.376: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 11:43:19.376: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 11:43:19.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999442s
Feb  2 11:43:20.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989115116s
Feb  2 11:43:21.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972203714s
Feb  2 11:43:22.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957661702s
Feb  2 11:43:23.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.902644225s
Feb  2 11:43:24.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.882209113s
Feb  2 11:43:25.549: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.864206119s
Feb  2 11:43:26.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.844930998s
Feb  2 11:43:27.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.777091918s
Feb  2 11:43:28.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 765.308172ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-zz6pc
Feb  2 11:43:29.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:43:30.434: INFO: stderr: "I0202 11:43:30.142294     616 log.go:172] (0xc0001380b0) (0xc0007f06e0) Create stream\nI0202 11:43:30.142710     616 log.go:172] (0xc0001380b0) (0xc0007f06e0) Stream added, broadcasting: 1\nI0202 11:43:30.148367     616 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0202 11:43:30.148400     616 log.go:172] (0xc0001380b0) (0xc0004c4b40) Create stream\nI0202 11:43:30.148409     616 log.go:172] (0xc0001380b0) (0xc0004c4b40) Stream added, broadcasting: 3\nI0202 11:43:30.149432     616 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0202 11:43:30.149452     616 log.go:172] (0xc0001380b0) (0xc0007f0780) Create stream\nI0202 11:43:30.149460     616 log.go:172] (0xc0001380b0) (0xc0007f0780) Stream added, broadcasting: 5\nI0202 11:43:30.152888     616 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0202 11:43:30.259429     616 log.go:172] (0xc0001380b0) Data frame received for 3\nI0202 11:43:30.259565     616 log.go:172] (0xc0004c4b40) (3) Data frame handling\nI0202 11:43:30.259601     616 log.go:172] (0xc0004c4b40) (3) Data frame sent\nI0202 11:43:30.423417     616 log.go:172] (0xc0001380b0) Data frame received for 1\nI0202 11:43:30.423642     616 log.go:172] (0xc0001380b0) (0xc0004c4b40) Stream removed, broadcasting: 3\nI0202 11:43:30.423841     616 log.go:172] (0xc0007f06e0) (1) Data frame handling\nI0202 11:43:30.423901     616 log.go:172] (0xc0007f06e0) (1) Data frame sent\nI0202 11:43:30.424114     616 log.go:172] (0xc0001380b0) (0xc0007f06e0) Stream removed, broadcasting: 1\nI0202 11:43:30.424171     616 log.go:172] (0xc0001380b0) (0xc0007f0780) Stream removed, broadcasting: 5\nI0202 11:43:30.424215     616 log.go:172] (0xc0001380b0) Go away received\nI0202 11:43:30.424800     616 log.go:172] (0xc0001380b0) (0xc0007f06e0) Stream removed, broadcasting: 1\nI0202 11:43:30.424812     616 log.go:172] (0xc0001380b0) (0xc0004c4b40) Stream removed, broadcasting: 3\nI0202 11:43:30.424816     616 log.go:172] (0xc0001380b0) (0xc0007f0780) Stream removed, broadcasting: 5\n"
Feb  2 11:43:30.434: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:43:30.434: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:43:30.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:43:30.998: INFO: stderr: "I0202 11:43:30.697721     638 log.go:172] (0xc00015c630) (0xc0007af400) Create stream\nI0202 11:43:30.698010     638 log.go:172] (0xc00015c630) (0xc0007af400) Stream added, broadcasting: 1\nI0202 11:43:30.702356     638 log.go:172] (0xc00015c630) Reply frame received for 1\nI0202 11:43:30.702402     638 log.go:172] (0xc00015c630) (0xc0005b0000) Create stream\nI0202 11:43:30.702416     638 log.go:172] (0xc00015c630) (0xc0005b0000) Stream added, broadcasting: 3\nI0202 11:43:30.703534     638 log.go:172] (0xc00015c630) Reply frame received for 3\nI0202 11:43:30.703558     638 log.go:172] (0xc00015c630) (0xc00022c000) Create stream\nI0202 11:43:30.703566     638 log.go:172] (0xc00015c630) (0xc00022c000) Stream added, broadcasting: 5\nI0202 11:43:30.705132     638 log.go:172] (0xc00015c630) Reply frame received for 5\nI0202 11:43:30.878441     638 log.go:172] (0xc00015c630) Data frame received for 3\nI0202 11:43:30.878744     638 log.go:172] (0xc0005b0000) (3) Data frame handling\nI0202 11:43:30.878803     638 log.go:172] (0xc0005b0000) (3) Data frame sent\nI0202 11:43:30.991860     638 log.go:172] (0xc00015c630) (0xc0005b0000) Stream removed, broadcasting: 3\nI0202 11:43:30.992145     638 log.go:172] (0xc00015c630) Data frame received for 1\nI0202 11:43:30.992239     638 log.go:172] (0xc0007af400) (1) Data frame handling\nI0202 11:43:30.992404     638 log.go:172] (0xc00015c630) (0xc00022c000) Stream removed, broadcasting: 5\nI0202 11:43:30.992499     638 log.go:172] (0xc0007af400) (1) Data frame sent\nI0202 11:43:30.992527     638 log.go:172] (0xc00015c630) (0xc0007af400) Stream removed, broadcasting: 1\nI0202 11:43:30.992548     638 log.go:172] (0xc00015c630) Go away received\nI0202 11:43:30.992924     638 log.go:172] (0xc00015c630) (0xc0007af400) Stream removed, broadcasting: 1\nI0202 11:43:30.992937     638 log.go:172] (0xc00015c630) (0xc0005b0000) Stream removed, broadcasting: 3\nI0202 11:43:30.992944     638 log.go:172] (0xc00015c630) (0xc00022c000) Stream removed, broadcasting: 5\n"
Feb  2 11:43:30.999: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:43:30.999: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:43:30.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zz6pc ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 11:43:31.449: INFO: stderr: "I0202 11:43:31.211602     658 log.go:172] (0xc000884210) (0xc0008805a0) Create stream\nI0202 11:43:31.211780     658 log.go:172] (0xc000884210) (0xc0008805a0) Stream added, broadcasting: 1\nI0202 11:43:31.217808     658 log.go:172] (0xc000884210) Reply frame received for 1\nI0202 11:43:31.217871     658 log.go:172] (0xc000884210) (0xc0006e2000) Create stream\nI0202 11:43:31.217888     658 log.go:172] (0xc000884210) (0xc0006e2000) Stream added, broadcasting: 3\nI0202 11:43:31.219772     658 log.go:172] (0xc000884210) Reply frame received for 3\nI0202 11:43:31.219857     658 log.go:172] (0xc000884210) (0xc000392d20) Create stream\nI0202 11:43:31.219870     658 log.go:172] (0xc000884210) (0xc000392d20) Stream added, broadcasting: 5\nI0202 11:43:31.220879     658 log.go:172] (0xc000884210) Reply frame received for 5\nI0202 11:43:31.318698     658 log.go:172] (0xc000884210) Data frame received for 3\nI0202 11:43:31.318816     658 log.go:172] (0xc0006e2000) (3) Data frame handling\nI0202 11:43:31.318909     658 log.go:172] (0xc0006e2000) (3) Data frame sent\nI0202 11:43:31.438968     658 log.go:172] (0xc000884210) Data frame received for 1\nI0202 11:43:31.439051     658 log.go:172] (0xc0008805a0) (1) Data frame handling\nI0202 11:43:31.439076     658 log.go:172] (0xc0008805a0) (1) Data frame sent\nI0202 11:43:31.439091     658 log.go:172] (0xc000884210) (0xc0008805a0) Stream removed, broadcasting: 1\nI0202 11:43:31.439646     658 log.go:172] (0xc000884210) (0xc0006e2000) Stream removed, broadcasting: 3\nI0202 11:43:31.440426     658 log.go:172] (0xc000884210) (0xc000392d20) Stream removed, broadcasting: 5\nI0202 11:43:31.440556     658 log.go:172] (0xc000884210) (0xc0008805a0) Stream removed, broadcasting: 1\nI0202 11:43:31.440569     658 log.go:172] (0xc000884210) (0xc0006e2000) Stream removed, broadcasting: 3\nI0202 11:43:31.440588     658 log.go:172] (0xc000884210) (0xc000392d20) Stream removed, broadcasting: 5\n"
Feb  2 11:43:31.449: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 11:43:31.449: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 11:43:31.449: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  2 11:44:11.492: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zz6pc
Feb  2 11:44:11.499: INFO: Scaling statefulset ss to 0
Feb  2 11:44:11.534: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 11:44:11.543: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:44:11.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zz6pc" for this suite.
Feb  2 11:44:19.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:44:19.939: INFO: namespace: e2e-tests-statefulset-zz6pc, resource: bindings, ignored listing per whitelist
Feb  2 11:44:19.956: INFO: namespace e2e-tests-statefulset-zz6pc deletion completed in 8.344744602s

• [SLOW TEST:125.129 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
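The reverse-order guarantee the StatefulSet test verifies above (ss-2 removed before ss-1 and ss-0 when scaling to 0) reduces to deleting the highest ordinal first. A minimal sketch of that ordering; `scale_down_order` is an illustrative helper, not the e2e framework's actual code:

```python
def scale_down_order(current_replicas: int, target: int) -> list[int]:
    """Ordinals a StatefulSet controller removes when scaling down:
    highest ordinal first, stopping at the target replica count."""
    return list(range(current_replicas - 1, target - 1, -1))
```

Scaling ss from 3 replicas to 0 yields `[2, 1, 0]`, matching the order in which ss-2, ss-1 and ss-0 disappear.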
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:44:19.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  2 11:44:20.268: INFO: Waiting up to 5m0s for pod "pod-599194ce-45b1-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-68hxg" to be "success or failure"
Feb  2 11:44:20.380: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.410105ms
Feb  2 11:44:22.397: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128673815s
Feb  2 11:44:24.408: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139818413s
Feb  2 11:44:26.593: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324519465s
Feb  2 11:44:28.637: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369021798s
Feb  2 11:44:30.653: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384112315s
STEP: Saw pod success
Feb  2 11:44:30.653: INFO: Pod "pod-599194ce-45b1-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:44:30.659: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-599194ce-45b1-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:44:30.823: INFO: Waiting for pod pod-599194ce-45b1-11ea-8b99-0242ac110005 to disappear
Feb  2 11:44:30.830: INFO: Pod pod-599194ce-45b1-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:44:30.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-68hxg" for this suite.
Feb  2 11:44:36.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:44:37.140: INFO: namespace: e2e-tests-emptydir-68hxg, resource: bindings, ignored listing per whitelist
Feb  2 11:44:37.156: INFO: namespace e2e-tests-emptydir-68hxg deletion completed in 6.318458165s

• [SLOW TEST:17.200 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:44:37.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:44:37.378: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  2 11:44:37.407: INFO: Number of nodes with available pods: 0
Feb  2 11:44:37.407: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:38.429: INFO: Number of nodes with available pods: 0
Feb  2 11:44:38.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:39.638: INFO: Number of nodes with available pods: 0
Feb  2 11:44:39.638: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:40.434: INFO: Number of nodes with available pods: 0
Feb  2 11:44:40.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:41.434: INFO: Number of nodes with available pods: 0
Feb  2 11:44:41.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:43.584: INFO: Number of nodes with available pods: 0
Feb  2 11:44:43.584: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:44.816: INFO: Number of nodes with available pods: 0
Feb  2 11:44:44.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:45.430: INFO: Number of nodes with available pods: 0
Feb  2 11:44:45.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:46.439: INFO: Number of nodes with available pods: 0
Feb  2 11:44:46.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:44:47.428: INFO: Number of nodes with available pods: 1
Feb  2 11:44:47.428: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  2 11:44:47.471: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:48.502: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:49.492: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:50.509: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:51.496: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:52.522: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:53.596: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:53.596: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:54.505: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:54.505: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:55.494: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:55.494: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:56.493: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:56.493: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:57.497: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:57.497: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:58.656: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:58.656: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:44:59.498: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:44:59.498: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:45:00.498: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:45:00.498: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:45:01.488: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:45:01.488: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:45:02.535: INFO: Wrong image for pod: daemon-set-46qj9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 11:45:02.535: INFO: Pod daemon-set-46qj9 is not available
Feb  2 11:45:03.495: INFO: Pod daemon-set-bx7jj is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  2 11:45:03.554: INFO: Number of nodes with available pods: 0
Feb  2 11:45:03.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:04.616: INFO: Number of nodes with available pods: 0
Feb  2 11:45:04.616: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:05.576: INFO: Number of nodes with available pods: 0
Feb  2 11:45:05.576: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:06.599: INFO: Number of nodes with available pods: 0
Feb  2 11:45:06.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:07.886: INFO: Number of nodes with available pods: 0
Feb  2 11:45:07.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:08.661: INFO: Number of nodes with available pods: 0
Feb  2 11:45:08.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:09.627: INFO: Number of nodes with available pods: 0
Feb  2 11:45:09.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:10.606: INFO: Number of nodes with available pods: 0
Feb  2 11:45:10.606: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 11:45:11.580: INFO: Number of nodes with available pods: 1
Feb  2 11:45:11.580: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-w65sf, will wait for the garbage collector to delete the pods
Feb  2 11:45:11.714: INFO: Deleting DaemonSet.extensions daemon-set took: 17.377834ms
Feb  2 11:45:12.014: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.535477ms
Feb  2 11:45:18.864: INFO: Number of nodes with available pods: 0
Feb  2 11:45:18.864: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 11:45:18.873: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-w65sf/daemonsets","resourceVersion":"20301685"},"items":null}

Feb  2 11:45:18.878: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-w65sf/pods","resourceVersion":"20301685"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:45:18.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-w65sf" for this suite.
Feb  2 11:45:26.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:45:27.034: INFO: namespace: e2e-tests-daemonsets-w65sf, resource: bindings, ignored listing per whitelist
Feb  2 11:45:27.115: INFO: namespace e2e-tests-daemonsets-w65sf deletion completed in 8.209242452s

• [SLOW TEST:49.958 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
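The long "Wrong image for pod" sequence above is a poll that compares each daemon pod's image against the updated DaemonSet spec until none mismatch. A sketch of that comparison step (a hypothetical helper, not the framework's actual check):

```python
def pods_with_wrong_image(pod_images: dict[str, str], expected: str) -> list[str]:
    """Daemon pods whose image does not yet match the updated DaemonSet
    spec; the RollingUpdate is complete once this list is empty."""
    return sorted(name for name, image in pod_images.items() if image != expected)
```

With the values from the log, daemon-set-46qj9 stays in the list while it still runs docker.io/library/nginx:1.14-alpine instead of gcr.io/kubernetes-e2e-test-images/redis:1.0.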
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:45:27.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-wqn9
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 11:45:27.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wqn9" in namespace "e2e-tests-subpath-78h6d" to be "success or failure"
Feb  2 11:45:27.461: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 130.396857ms
Feb  2 11:45:29.656: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32481716s
Feb  2 11:45:31.667: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33547198s
Feb  2 11:45:33.709: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377627708s
Feb  2 11:45:35.857: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526041984s
Feb  2 11:45:37.883: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.551852379s
Feb  2 11:45:39.917: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.5855116s
Feb  2 11:45:42.421: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=true. Elapsed: 15.089821449s
Feb  2 11:45:44.437: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 17.106303757s
Feb  2 11:45:46.527: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 19.196202078s
Feb  2 11:45:48.582: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 21.250457205s
Feb  2 11:45:50.606: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 23.274942259s
Feb  2 11:45:52.629: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 25.297928422s
Feb  2 11:45:54.650: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 27.318704929s
Feb  2 11:45:56.697: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 29.365735091s
Feb  2 11:45:58.709: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Running", Reason="", readiness=false. Elapsed: 31.377741895s
Feb  2 11:46:00.851: INFO: Pod "pod-subpath-test-projected-wqn9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.520274382s
STEP: Saw pod success
Feb  2 11:46:00.851: INFO: Pod "pod-subpath-test-projected-wqn9" satisfied condition "success or failure"
Feb  2 11:46:00.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-wqn9 container test-container-subpath-projected-wqn9: 
STEP: delete the pod
Feb  2 11:46:01.125: INFO: Waiting for pod pod-subpath-test-projected-wqn9 to disappear
Feb  2 11:46:01.140: INFO: Pod pod-subpath-test-projected-wqn9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-wqn9
Feb  2 11:46:01.141: INFO: Deleting pod "pod-subpath-test-projected-wqn9" in namespace "e2e-tests-subpath-78h6d"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:46:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-78h6d" for this suite.
Feb  2 11:46:09.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:46:09.288: INFO: namespace: e2e-tests-subpath-78h6d, resource: bindings, ignored listing per whitelist
Feb  2 11:46:09.342: INFO: namespace e2e-tests-subpath-78h6d deletion completed in 8.190386864s

• [SLOW TEST:42.227 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
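Every 'Waiting up to 5m0s for pod ... to be "success or failure"' block in this log, including the subpath test above, is a fixed-deadline poll. A generic sketch of that loop (illustrative only; the framework's own wait helpers differ in their details):

```python
import time


def wait_for(condition, timeout_s: float, interval_s: float = 0.01) -> bool:
    """Poll `condition` until it returns truthy or the deadline passes,
    like the log's "Waiting up to ..." loops with their ~2s interval."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False
```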
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:46:09.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9abb7323-45b1-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:46:09.592: INFO: Waiting up to 5m0s for pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-hpwfp" to be "success or failure"
Feb  2 11:46:09.603: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.397493ms
Feb  2 11:46:11.726: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133993763s
Feb  2 11:46:13.745: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152997848s
Feb  2 11:46:15.760: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167985197s
Feb  2 11:46:17.773: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1810168s
Feb  2 11:46:19.823: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230960797s
STEP: Saw pod success
Feb  2 11:46:19.823: INFO: Pod "pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:46:19.834: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb  2 11:46:19.991: INFO: Waiting for pod pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005 to disappear
Feb  2 11:46:19.998: INFO: Pod pod-secrets-9abc9de7-45b1-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:46:19.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hpwfp" for this suite.
Feb  2 11:46:26.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:46:26.165: INFO: namespace: e2e-tests-secrets-hpwfp, resource: bindings, ignored listing per whitelist
Feb  2 11:46:26.218: INFO: namespace e2e-tests-secrets-hpwfp deletion completed in 6.214038114s

• [SLOW TEST:16.876 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
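The secret-env test above injects Secret keys as environment variables; a Secret's `.data` values are stored base64-encoded and decoded before injection. A minimal sketch of that decoding step (not kubelet code):

```python
import base64


def secret_to_env(data_b64: dict[str, str]) -> dict[str, str]:
    """Decode a Secret's base64 `.data` map into the plain strings a
    container would see as environment variable values."""
    return {key: base64.b64decode(val).decode() for key, val in data_b64.items()}
```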
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:46:26.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  2 11:46:33.736: INFO: 10 pods remaining
Feb  2 11:46:33.736: INFO: 10 pods has nil DeletionTimestamp
Feb  2 11:46:33.736: INFO: 
Feb  2 11:46:35.028: INFO: 9 pods remaining
Feb  2 11:46:35.028: INFO: 0 pods has nil DeletionTimestamp
Feb  2 11:46:35.028: INFO: 
Feb  2 11:46:36.223: INFO: 1 pods remaining
Feb  2 11:46:36.223: INFO: 0 pods has nil DeletionTimestamp
Feb  2 11:46:36.223: INFO: 
STEP: Gathering metrics
W0202 11:46:37.252273       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 11:46:37.252: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:46:37.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ftz8x" for this suite.
Feb  2 11:46:51.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:46:51.336: INFO: namespace: e2e-tests-gc-ftz8x, resource: bindings, ignored listing per whitelist
Feb  2 11:46:51.499: INFO: namespace e2e-tests-gc-ftz8x deletion completed in 14.241533049s

• [SLOW TEST:25.280 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
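The garbage-collector test above deletes the rc with foreground propagation, so the rc keeps its `foregroundDeletion` finalizer and lingers until its pods are gone, which is what the shrinking "pods remaining" counts show. The semantics in sketch form (illustrative helper, not controller code):

```python
def owner_removable(finalizers: list[str], live_dependents: int) -> bool:
    """Under foreground cascading deletion the owner object carries the
    'foregroundDeletion' finalizer and can only be removed once every
    dependent (here: the rc's pods) has been deleted."""
    if "foregroundDeletion" in finalizers:
        return live_dependents == 0
    return True  # background/orphan deletion removes the owner immediately
```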
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:46:51.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  2 11:49:58.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:49:58.119: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:00.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:00.128: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:02.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:02.151: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:04.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:04.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:06.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:06.132: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:08.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:08.156: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:10.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:10.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:12.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:12.135: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:14.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:14.135: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:16.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:16.138: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:18.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:18.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:20.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:20.138: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:22.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:22.144: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:24.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:24.139: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:26.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:26.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:28.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:28.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:30.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:30.131: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:32.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:32.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 11:50:34.119: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 11:50:34.166: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:50:34.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qtzl8" for this suite.
Feb  2 11:50:58.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:50:58.422: INFO: namespace: e2e-tests-container-lifecycle-hook-qtzl8, resource: bindings, ignored listing per whitelist
Feb  2 11:50:58.432: INFO: namespace e2e-tests-container-lifecycle-hook-qtzl8 deletion completed in 24.255996877s

• [SLOW TEST:246.933 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:50:58.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  2 11:50:59.192: INFO: Waiting up to 5m0s for pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt" in namespace "e2e-tests-svcaccounts-7vw2b" to be "success or failure"
Feb  2 11:50:59.273: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 81.046355ms
Feb  2 11:51:01.292: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099515258s
Feb  2 11:51:03.311: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118821185s
Feb  2 11:51:05.327: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134363145s
Feb  2 11:51:07.409: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216854706s
Feb  2 11:51:09.550: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.357111352s
Feb  2 11:51:11.572: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379800343s
Feb  2 11:51:13.599: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.406847032s
STEP: Saw pod success
Feb  2 11:51:13.599: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt" satisfied condition "success or failure"
Feb  2 11:51:13.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt container token-test: 
STEP: delete the pod
Feb  2 11:51:14.802: INFO: Waiting for pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt to disappear
Feb  2 11:51:14.818: INFO: Pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-xk6mt no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  2 11:51:14.833: INFO: Waiting up to 5m0s for pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd" in namespace "e2e-tests-svcaccounts-7vw2b" to be "success or failure"
Feb  2 11:51:14.972: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 139.130794ms
Feb  2 11:51:16.981: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147397431s
Feb  2 11:51:19.218: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384679178s
Feb  2 11:51:21.231: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398158582s
Feb  2 11:51:23.295: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.462204949s
Feb  2 11:51:25.661: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.82826782s
Feb  2 11:51:27.672: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.838846999s
Feb  2 11:51:29.680: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.846571536s
STEP: Saw pod success
Feb  2 11:51:29.680: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd" satisfied condition "success or failure"
Feb  2 11:51:29.683: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd container root-ca-test: 
STEP: delete the pod
Feb  2 11:51:30.371: INFO: Waiting for pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd to disappear
Feb  2 11:51:30.396: INFO: Pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-dfpvd no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  2 11:51:30.623: INFO: Waiting up to 5m0s for pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62" in namespace "e2e-tests-svcaccounts-7vw2b" to be "success or failure"
Feb  2 11:51:30.836: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 211.992197ms
Feb  2 11:51:32.858: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234727966s
Feb  2 11:51:34.897: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273314915s
Feb  2 11:51:36.919: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295942305s
Feb  2 11:51:38.937: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313283487s
Feb  2 11:51:40.949: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325373626s
Feb  2 11:51:42.967: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.343004193s
Feb  2 11:51:45.011: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Pending", Reason="", readiness=false. Elapsed: 14.38763269s
Feb  2 11:51:47.042: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.418728031s
STEP: Saw pod success
Feb  2 11:51:47.042: INFO: Pod "pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62" satisfied condition "success or failure"
Feb  2 11:51:47.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62 container namespace-test: 
STEP: delete the pod
Feb  2 11:51:47.193: INFO: Waiting for pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62 to disappear
Feb  2 11:51:47.203: INFO: Pod pod-service-account-47591dcb-45b2-11ea-8b99-0242ac110005-r6g62 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:51:47.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-7vw2b" for this suite.
Feb  2 11:51:55.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:51:55.441: INFO: namespace: e2e-tests-svcaccounts-7vw2b, resource: bindings, ignored listing per whitelist
Feb  2 11:51:55.499: INFO: namespace e2e-tests-svcaccounts-7vw2b deletion completed in 8.286002001s

• [SLOW TEST:57.067 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:51:55.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:52:05.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qg4vn" for this suite.
Feb  2 11:52:59.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:53:00.108: INFO: namespace: e2e-tests-kubelet-test-qg4vn, resource: bindings, ignored listing per whitelist
Feb  2 11:53:00.141: INFO: namespace e2e-tests-kubelet-test-qg4vn deletion completed in 54.231580827s

• [SLOW TEST:64.642 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
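The hostAliases test above verifies that entries from `spec.hostAliases` are written into the container's /etc/hosts. A manifest along these lines exercises the same behavior (names and values here are illustrative, not the manifest the e2e framework generated):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["cat", "/etc/hosts"]   # output should include the aliases above
```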
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:53:00.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  2 11:53:00.444: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:53:17.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vjp5v" for this suite.
Feb  2 11:53:25.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:53:25.554: INFO: namespace: e2e-tests-init-container-vjp5v, resource: bindings, ignored listing per whitelist
Feb  2 11:53:25.585: INFO: namespace e2e-tests-init-container-vjp5v deletion completed in 8.269327146s

• [SLOW TEST:25.444 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
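The InitContainer test above checks that when an init container fails in a pod with `restartPolicy: Never`, the app containers never start and the pod fails. A minimal reproduction sketch (illustrative, assuming busybox; not the exact spec logged as "PodSpec: initContainers in spec.initContainers"):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]   # exits non-zero; with RestartNever it is not retried
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]    # never started; pod ends in Failed phase
```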
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:53:25.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  2 11:53:25.726: INFO: Waiting up to 5m0s for pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-qtkh6" to be "success or failure"
Feb  2 11:53:25.864: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 137.722742ms
Feb  2 11:53:27.879: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15288824s
Feb  2 11:53:29.899: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172440692s
Feb  2 11:53:31.911: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184486478s
Feb  2 11:53:33.988: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261336056s
Feb  2 11:53:36.010: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.283632275s
STEP: Saw pod success
Feb  2 11:53:36.010: INFO: Pod "pod-9eb09825-45b2-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:53:36.018: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9eb09825-45b2-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:53:36.074: INFO: Waiting for pod pod-9eb09825-45b2-11ea-8b99-0242ac110005 to disappear
Feb  2 11:53:36.091: INFO: Pod pod-9eb09825-45b2-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:53:36.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qtkh6" for this suite.
Feb  2 11:53:42.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:53:42.359: INFO: namespace: e2e-tests-emptydir-qtkh6, resource: bindings, ignored listing per whitelist
Feb  2 11:53:42.408: INFO: namespace e2e-tests-emptydir-qtkh6 deletion completed in 6.249886963s

• [SLOW TEST:16.823 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:53:42.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:53:42.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  2 11:53:42.882: INFO: stderr: ""
Feb  2 11:53:42.882: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  2 11:53:42.887: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:53:42.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gztf7" for this suite.
Feb  2 11:53:48.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:53:48.973: INFO: namespace: e2e-tests-kubectl-gztf7, resource: bindings, ignored listing per whitelist
Feb  2 11:53:49.111: INFO: namespace e2e-tests-kubectl-gztf7 deletion completed in 6.213363135s

S [SKIPPING] [6.702 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  2 11:53:42.887: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:53:49.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 11:53:49.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-jpddn" to be "success or failure"
Feb  2 11:53:49.309: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.948672ms
Feb  2 11:53:51.328: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024550631s
Feb  2 11:53:53.344: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040593261s
Feb  2 11:53:56.505: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.201826773s
Feb  2 11:53:58.524: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.220567201s
Feb  2 11:54:00.547: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.244082863s
STEP: Saw pod success
Feb  2 11:54:00.547: INFO: Pod "downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:54:00.563: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 11:54:00.784: INFO: Waiting for pod downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005 to disappear
Feb  2 11:54:00.811: INFO: Pod downwardapi-volume-acbe9999-45b2-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:54:00.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jpddn" for this suite.
Feb  2 11:54:06.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:54:07.058: INFO: namespace: e2e-tests-downward-api-jpddn, resource: bindings, ignored listing per whitelist
Feb  2 11:54:07.219: INFO: namespace e2e-tests-downward-api-jpddn deletion completed in 6.400686049s

• [SLOW TEST:18.107 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:54:07.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-b78b031d-45b2-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 11:54:07.433: INFO: Waiting up to 5m0s for pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-t722l" to be "success or failure"
Feb  2 11:54:07.455: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.808711ms
Feb  2 11:54:09.563: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129681719s
Feb  2 11:54:11.587: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153112013s
Feb  2 11:54:13.665: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231083667s
Feb  2 11:54:15.761: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327105484s
Feb  2 11:54:17.788: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.354652113s
STEP: Saw pod success
Feb  2 11:54:17.788: INFO: Pod "pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:54:17.794: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 11:54:17.914: INFO: Waiting for pod pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005 to disappear
Feb  2 11:54:17.927: INFO: Pod pod-configmaps-b78bbbad-45b2-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:54:17.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t722l" for this suite.
Feb  2 11:54:25.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:54:26.049: INFO: namespace: e2e-tests-configmap-t722l, resource: bindings, ignored listing per whitelist
Feb  2 11:54:26.203: INFO: namespace e2e-tests-configmap-t722l deletion completed in 8.258207829s

• [SLOW TEST:18.984 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
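The ConfigMap test above consumes a volume "with mappings as non-root": the `items` list remaps a ConfigMap key to a custom path, and the pod runs with a non-root UID. A sketch of that shape (all names and the UID are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mapped
spec:
  securityContext:
    runAsUser: 1000            # non-root, per the test name
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1            # mapping: key data-1 exposed at a new path
        path: path/to/data-2
```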
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:54:26.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c2fd1b4e-45b2-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:54:26.703: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-vzv42" to be "success or failure"
Feb  2 11:54:26.733: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.00813ms
Feb  2 11:54:28.784: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080486298s
Feb  2 11:54:30.817: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113444032s
Feb  2 11:54:32.829: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125380649s
Feb  2 11:54:34.982: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278391552s
Feb  2 11:54:36.994: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290144488s
STEP: Saw pod success
Feb  2 11:54:36.994: INFO: Pod "pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:54:36.999: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 11:54:37.141: INFO: Waiting for pod pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005 to disappear
Feb  2 11:54:37.146: INFO: Pod pod-projected-secrets-c3037b25-45b2-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:54:37.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vzv42" for this suite.
Feb  2 11:54:43.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:54:43.314: INFO: namespace: e2e-tests-projected-vzv42, resource: bindings, ignored listing per whitelist
Feb  2 11:54:43.571: INFO: namespace e2e-tests-projected-vzv42 deletion completed in 6.418107392s

• [SLOW TEST:17.367 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:54:43.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 11:54:43.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5ldh6'
Feb  2 11:54:45.911: INFO: stderr: ""
Feb  2 11:54:45.911: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb  2 11:54:46.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5ldh6'
Feb  2 11:54:49.895: INFO: stderr: ""
Feb  2 11:54:49.896: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:54:49.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5ldh6" for this suite.
Feb  2 11:54:58.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:54:58.072: INFO: namespace: e2e-tests-kubectl-5ldh6, resource: bindings, ignored listing per whitelist
Feb  2 11:54:58.246: INFO: namespace e2e-tests-kubectl-5ldh6 deletion completed in 8.340551996s

• [SLOW TEST:14.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:54:58.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-d6093d64-45b2-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 11:54:58.774: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-rflcd" to be "success or failure"
Feb  2 11:54:58.838: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.341711ms
Feb  2 11:55:00.863: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088326384s
Feb  2 11:55:02.887: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112764657s
Feb  2 11:55:05.010: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235558471s
Feb  2 11:55:07.024: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249798571s
Feb  2 11:55:09.042: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.267360336s
STEP: Saw pod success
Feb  2 11:55:09.042: INFO: Pod "pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:55:09.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 11:55:09.306: INFO: Waiting for pod pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005 to disappear
Feb  2 11:55:09.330: INFO: Pod pod-projected-secrets-d61aacdd-45b2-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:55:09.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rflcd" for this suite.
Feb  2 11:55:17.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:55:17.607: INFO: namespace: e2e-tests-projected-rflcd, resource: bindings, ignored listing per whitelist
Feb  2 11:55:17.652: INFO: namespace e2e-tests-projected-rflcd deletion completed in 8.313597693s

• [SLOW TEST:19.405 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:55:17.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  2 11:55:17.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302921,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  2 11:55:17.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302921,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  2 11:55:27.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302934,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  2 11:55:27.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302934,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  2 11:55:37.855: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302947,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 11:55:37.855: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302947,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  2 11:55:48.991: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302961,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 11:55:48.991: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-a,UID:e17e6398-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302961,Generation:0,CreationTimestamp:2020-02-02 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  2 11:55:59.020: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-b,UID:fa0f1640-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302972,Generation:0,CreationTimestamp:2020-02-02 11:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  2 11:55:59.020: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-b,UID:fa0f1640-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302972,Generation:0,CreationTimestamp:2020-02-02 11:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  2 11:56:09.058: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-b,UID:fa0f1640-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302985,Generation:0,CreationTimestamp:2020-02-02 11:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  2 11:56:09.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-ll5qz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ll5qz/configmaps/e2e-watch-test-configmap-b,UID:fa0f1640-45b2-11ea-a994-fa163e34d433,ResourceVersion:20302985,Generation:0,CreationTimestamp:2020-02-02 11:55:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:56:19.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-ll5qz" for this suite.
Feb  2 11:56:25.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:56:25.241: INFO: namespace: e2e-tests-watch-ll5qz, resource: bindings, ignored listing per whitelist
Feb  2 11:56:25.364: INFO: namespace e2e-tests-watch-ll5qz deletion completed in 6.275464296s

• [SLOW TEST:67.711 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:56:25.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-09e8e283-45b3-11ea-8b99-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-09e8e238-45b3-11ea-8b99-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  2 11:56:25.741: INFO: Waiting up to 5m0s for pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-q97cf" to be "success or failure"
Feb  2 11:56:25.763: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.654838ms
Feb  2 11:56:27.813: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072179752s
Feb  2 11:56:29.832: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090885352s
Feb  2 11:56:32.140: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399009085s
Feb  2 11:56:34.324: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.583492396s
Feb  2 11:56:36.673: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.932557388s
STEP: Saw pod success
Feb  2 11:56:36.674: INFO: Pod "projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:56:36.682: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb  2 11:56:37.034: INFO: Waiting for pod projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005 to disappear
Feb  2 11:56:37.127: INFO: Pod projected-volume-09e8e0c1-45b3-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:56:37.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q97cf" for this suite.
Feb  2 11:56:45.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:56:45.241: INFO: namespace: e2e-tests-projected-q97cf, resource: bindings, ignored listing per whitelist
Feb  2 11:56:45.421: INFO: namespace e2e-tests-projected-q97cf deletion completed in 8.283084602s

• [SLOW TEST:20.056 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:56:45.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  2 11:57:05.860: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:05.907: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:07.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:07.977: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:09.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:09.925: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:11.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:11.926: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:13.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:13.948: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:15.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:15.924: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:17.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:17.957: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:19.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:19.933: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:21.910: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:21.958: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:23.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:23.936: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:25.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:25.921: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:27.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:27.930: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:29.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:29.926: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:31.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:31.923: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 11:57:33.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 11:57:33.938: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:57:33.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vbtjm" for this suite.
Feb  2 11:57:58.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:57:58.263: INFO: namespace: e2e-tests-container-lifecycle-hook-vbtjm, resource: bindings, ignored listing per whitelist
Feb  2 11:57:58.298: INFO: namespace e2e-tests-container-lifecycle-hook-vbtjm deletion completed in 24.281932065s

• [SLOW TEST:72.877 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:57:58.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:58:28.640: INFO: Container started at 2020-02-02 11:58:05 +0000 UTC, pod became ready at 2020-02-02 11:58:28 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:58:28.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-f27dn" for this suite.
Feb  2 11:58:52.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:58:52.774: INFO: namespace: e2e-tests-container-probe-f27dn, resource: bindings, ignored listing per whitelist
Feb  2 11:58:52.834: INFO: namespace e2e-tests-container-probe-f27dn deletion completed in 24.188035933s

• [SLOW TEST:54.535 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:58:52.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  2 11:58:53.140: INFO: Waiting up to 5m0s for pod "pod-61d83220-45b3-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-r796m" to be "success or failure"
Feb  2 11:58:53.156: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.469913ms
Feb  2 11:58:55.537: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396789293s
Feb  2 11:58:57.562: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42191472s
Feb  2 11:58:59.586: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446424298s
Feb  2 11:59:01.598: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458334566s
Feb  2 11:59:03.616: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.47578339s
STEP: Saw pod success
Feb  2 11:59:03.616: INFO: Pod "pod-61d83220-45b3-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:59:03.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-61d83220-45b3-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 11:59:04.441: INFO: Waiting for pod pod-61d83220-45b3-11ea-8b99-0242ac110005 to disappear
Feb  2 11:59:04.451: INFO: Pod pod-61d83220-45b3-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:59:04.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r796m" for this suite.
Feb  2 11:59:10.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:59:10.650: INFO: namespace: e2e-tests-emptydir-r796m, resource: bindings, ignored listing per whitelist
Feb  2 11:59:10.836: INFO: namespace e2e-tests-emptydir-r796m deletion completed in 6.37496557s

• [SLOW TEST:18.002 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:59:10.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  2 11:59:11.030: INFO: Waiting up to 5m0s for pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-xz5mz" to be "success or failure"
Feb  2 11:59:11.051: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.383262ms
Feb  2 11:59:13.065: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034961929s
Feb  2 11:59:15.079: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04947448s
Feb  2 11:59:17.089: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059118122s
Feb  2 11:59:19.156: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126114663s
Feb  2 11:59:21.176: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145975002s
STEP: Saw pod success
Feb  2 11:59:21.176: INFO: Pod "downward-api-6c813422-45b3-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 11:59:21.181: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6c813422-45b3-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 11:59:21.248: INFO: Waiting for pod downward-api-6c813422-45b3-11ea-8b99-0242ac110005 to disappear
Feb  2 11:59:21.252: INFO: Pod downward-api-6c813422-45b3-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:59:21.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xz5mz" for this suite.
Feb  2 11:59:27.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:59:27.392: INFO: namespace: e2e-tests-downward-api-xz5mz, resource: bindings, ignored listing per whitelist
Feb  2 11:59:27.468: INFO: namespace e2e-tests-downward-api-xz5mz deletion completed in 6.210939764s

• [SLOW TEST:16.632 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
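The Downward API test above injects the pod's own UID into the container environment and checks the container can see it. A minimal hedged sketch of the kind of pod spec involved (names and image are illustrative, not taken from the log; the e2e suite uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image; the test's image is not shown in the log
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # downward API: the pod's own UID
```

The test then waits for phase `Succeeded` ("success or failure") and reads the container log to verify the variable was set, which matches the Pending→Succeeded progression logged above.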
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:59:27.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 11:59:27.721: INFO: Creating ReplicaSet my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005
Feb  2 11:59:27.787: INFO: Pod name my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005: Found 0 pods out of 1
Feb  2 11:59:33.146: INFO: Pod name my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005: Found 1 pods out of 1
Feb  2 11:59:33.146: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005" is running
Feb  2 11:59:39.171: INFO: Pod "my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005-kpdtz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 11:59:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 11:59:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 11:59:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 11:59:27 +0000 UTC Reason: Message:}])
Feb  2 11:59:39.172: INFO: Trying to dial the pod
Feb  2 11:59:44.236: INFO: Controller my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005: Got expected result from replica 1 [my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005-kpdtz]: "my-hostname-basic-76775262-45b3-11ea-8b99-0242ac110005-kpdtz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 11:59:44.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-5wkq4" for this suite.
Feb  2 11:59:50.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 11:59:50.328: INFO: namespace: e2e-tests-replicaset-5wkq4, resource: bindings, ignored listing per whitelist
Feb  2 11:59:50.420: INFO: namespace e2e-tests-replicaset-5wkq4 deletion completed in 6.168662245s

• [SLOW TEST:22.951 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
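The ReplicaSet test verifies that each replica serves its own hostname: the dial in the log returns the pod name (`…-kpdtz`) as the response body. A hedged sketch of such a ReplicaSet (names are illustrative; the image is assumed to be the public serve-hostname image, which echoes the pod's hostname on port 9376):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # illustrative; the log uses a generated suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve-hostname:1.1   # assumed public image
        ports:
        - containerPort: 9376
```

Because each pod's hostname equals its pod name, dialing every replica and comparing the response to the pod name is enough to prove each replica is serving.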
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 11:59:50.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-68llp
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-68llp
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-68llp
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-68llp
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-68llp
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-68llp
Feb  2 12:00:04.924: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-68llp, name: ss-0, uid: 8c214ba9-45b3-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb  2 12:00:05.063: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-68llp, name: ss-0, uid: 8c214ba9-45b3-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  2 12:00:05.079: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-68llp, name: ss-0, uid: 8c214ba9-45b3-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  2 12:00:05.096: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-68llp
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-68llp
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-68llp and running
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  2 12:00:20.515: INFO: Deleting all statefulset in ns e2e-tests-statefulset-68llp
Feb  2 12:00:20.532: INFO: Scaling statefulset ss to 0
Feb  2 12:00:40.684: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 12:00:40.692: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:00:40.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-68llp" for this suite.
Feb  2 12:00:48.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:00:49.073: INFO: namespace: e2e-tests-statefulset-68llp, resource: bindings, ignored listing per whitelist
Feb  2 12:00:49.082: INFO: namespace e2e-tests-statefulset-68llp deletion completed in 8.323822639s

• [SLOW TEST:58.661 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
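The eviction scenario above is driven by a host-port conflict: a plain pod claims a hostPort on the chosen node, the stateful pod `ss-0` requests the same port on the same node and goes `Pending` → `Failed`, and the StatefulSet controller deletes and recreates it until the conflicting pod is removed. A hedged sketch of the conflicting plain pod (node name, image, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: some-node              # assumed: pinned to the same node ss-0 is scheduled to
  containers:
  - name: webserver
    image: nginx:1.14-alpine       # illustrative image
    ports:
    - containerPort: 80
      hostPort: 21017              # illustrative; any hostPort both pods claim will conflict
```

Once this pod is deleted (the "Removing pod with conflicting port" step), the controller's next recreation of `ss-0` can bind the port and reach `Running`.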
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:00:49.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  2 12:00:59.947: INFO: Successfully updated pod "pod-update-a714d20d-45b3-11ea-8b99-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  2 12:00:59.982: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:00:59.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lx6wf" for this suite.
Feb  2 12:01:41.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:01:41.397: INFO: namespace: e2e-tests-pods-lx6wf, resource: bindings, ignored listing per whitelist
Feb  2 12:01:41.420: INFO: namespace e2e-tests-pods-lx6wf deletion completed in 41.429855266s

• [SLOW TEST:52.338 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
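The "Pod update OK" above is performed through the API; the log does not say which field changed, but most pod spec fields are immutable after creation, so in-place updates typically touch metadata such as labels. A hedged sketch of such an update as a strategic-merge patch (field and value are illustrative):

```yaml
# patch fragment, applied with something like:
#   kubectl patch pod pod-update-<suffix> -p '{"metadata":{"labels":{"time":"updated"}}}'
metadata:
  labels:
    time: updated    # illustrative label; assumed, since the log omits the updated field
```

The test then re-reads the pod and verifies the change round-tripped, which is what "verifying the updated pod is in kubernetes" covers.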
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:01:41.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 12:01:41.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:01:41.795: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 12:01:41.796: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  2 12:01:41.901: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  2 12:01:41.949: INFO: scanned /root for discovery docs: 
Feb  2 12:01:41.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:08.435: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  2 12:02:08.435: INFO: stdout: "Created e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764\nScaling up e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  2 12:02:08.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:08.637: INFO: stderr: ""
Feb  2 12:02:08.637: INFO: stdout: "e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764-mgssk e2e-test-nginx-rc-h8jnd "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  2 12:02:13.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:13.860: INFO: stderr: ""
Feb  2 12:02:13.860: INFO: stdout: "e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764-mgssk "
Feb  2 12:02:13.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764-mgssk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:14.119: INFO: stderr: ""
Feb  2 12:02:14.119: INFO: stdout: "true"
Feb  2 12:02:14.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764-mgssk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:14.235: INFO: stderr: ""
Feb  2 12:02:14.235: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  2 12:02:14.235: INFO: e2e-test-nginx-rc-e8febfbabf18206c743bd104950c2764-mgssk is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  2 12:02:14.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jjxth'
Feb  2 12:02:14.398: INFO: stderr: ""
Feb  2 12:02:14.398: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:02:14.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jjxth" for this suite.
Feb  2 12:02:38.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:02:38.638: INFO: namespace: e2e-tests-kubectl-jjxth, resource: bindings, ignored listing per whitelist
Feb  2 12:02:38.874: INFO: namespace e2e-tests-kubectl-jjxth deletion completed in 24.469244195s

• [SLOW TEST:57.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
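The stderr captured above notes that `kubectl rolling-update` (which operated on ReplicationControllers) is deprecated. Its replacement is the Deployment rollout, which performs the same scale-up/scale-down choreography the log shows ("keep 1 pods available, don't exceed 2 pods") declaratively. A hedged equivalent (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx             # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # mirrors "don't exceed 2 pods"
      maxUnavailable: 0            # mirrors "keep 1 pods available"
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Changing the template image (e.g. with `kubectl set image`) then triggers the surge/drain sequence that `rolling-update` used to perform imperatively.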
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:02:38.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  2 12:02:39.146: INFO: Waiting up to 5m0s for pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005" in namespace "e2e-tests-containers-ztklc" to be "success or failure"
Feb  2 12:02:39.160: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.293983ms
Feb  2 12:02:41.182: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036752457s
Feb  2 12:02:43.204: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057984852s
Feb  2 12:02:45.214: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068519902s
Feb  2 12:02:47.230: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08423934s
STEP: Saw pod success
Feb  2 12:02:47.230: INFO: Pod "client-containers-e884405e-45b3-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:02:47.233: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e884405e-45b3-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 12:02:47.391: INFO: Waiting for pod client-containers-e884405e-45b3-11ea-8b99-0242ac110005 to disappear
Feb  2 12:02:47.405: INFO: Pod client-containers-e884405e-45b3-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:02:47.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-ztklc" for this suite.
Feb  2 12:02:53.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:02:53.562: INFO: namespace: e2e-tests-containers-ztklc, resource: bindings, ignored listing per whitelist
Feb  2 12:02:53.564: INFO: namespace e2e-tests-containers-ztklc deletion completed in 6.133442993s

• [SLOW TEST:14.689 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
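Overriding the image's default arguments ("docker cmd") maps to the container's `args` field; setting `command` would override the entrypoint instead. A minimal hedged sketch (name and image are illustrative; the e2e suite uses its own argument-echoing test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumed image
    # `args` replaces the image's CMD while keeping its ENTRYPOINT.
    args: ["echo", "overridden arguments"]
```

The test reads the container log after the pod reaches `Succeeded` to confirm the overridden arguments were what actually ran.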
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:02:53.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-t24m2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-t24m2 to expose endpoints map[]
Feb  2 12:02:53.979: INFO: Get endpoints failed (2.629718ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  2 12:02:54.988: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-t24m2 exposes endpoints map[] (1.011979736s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-t24m2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-t24m2 to expose endpoints map[pod1:[80]]
Feb  2 12:02:59.271: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.26538992s elapsed, will retry)
Feb  2 12:03:02.360: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-t24m2 exposes endpoints map[pod1:[80]] (7.354581042s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-t24m2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-t24m2 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  2 12:03:07.125: INFO: Unexpected endpoints: found map[f2031f4b-45b3-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.748801373s elapsed, will retry)
Feb  2 12:03:09.644: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-t24m2 exposes endpoints map[pod1:[80] pod2:[80]] (7.268074613s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-t24m2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-t24m2 to expose endpoints map[pod2:[80]]
Feb  2 12:03:10.734: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-t24m2 exposes endpoints map[pod2:[80]] (1.083535261s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-t24m2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-t24m2 to expose endpoints map[]
Feb  2 12:03:11.780: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-t24m2 exposes endpoints map[] (1.026596558s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:03:13.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-t24m2" for this suite.
Feb  2 12:03:37.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:03:37.798: INFO: namespace: e2e-tests-services-t24m2, resource: bindings, ignored listing per whitelist
Feb  2 12:03:37.932: INFO: namespace e2e-tests-services-t24m2 deletion completed in 24.56196029s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:44.368 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
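The service test above drives the endpoints object purely through label selection: a pod whose labels match the service selector (and whose containers are ready) appears in the endpoints map with its container port, and disappears again when deleted, exactly as the `map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → …` progression shows. A hedged sketch of the service/pod pair (labels and image are illustrative; the log does not show the selector used):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: pod1                     # illustrative selector
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: pod1                     # must match the service selector to become an endpoint
spec:
  containers:
  - name: server
    image: k8s.gcr.io/pause:3.1    # assumed placeholder image
    ports:
    - containerPort: 80
```

The few-second lag between pod creation and endpoint appearance in the log is the endpoints controller waiting for the pod to become Ready.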
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:03:37.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  2 12:03:38.163: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  2 12:03:38.174: INFO: Waiting for terminating namespaces to be deleted...
Feb  2 12:03:38.180: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  2 12:03:38.195: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  2 12:03:38.195: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 12:03:38.195: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:03:38.195: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  2 12:03:38.195: INFO: 	Container weave ready: true, restart count 0
Feb  2 12:03:38.195: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 12:03:38.195: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 12:03:38.195: INFO: 	Container coredns ready: true, restart count 0
Feb  2 12:03:38.195: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:03:38.195: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:03:38.195: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:03:38.195: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 12:03:38.195: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ef9444fb379036], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:03:39.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-lt6vw" for this suite.
Feb  2 12:03:45.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:03:45.377: INFO: namespace: e2e-tests-sched-pred-lt6vw, resource: bindings, ignored listing per whitelist
Feb  2 12:03:45.543: INFO: namespace e2e-tests-sched-pred-lt6vw deletion completed in 6.258810775s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.610 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
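The `FailedScheduling` event above is exactly what an unmatched `nodeSelector` produces: the scheduler finds no node carrying the requested label, reports "0/1 nodes are available: 1 node(s) didn't match node selector", and leaves the pod `Pending`. A hedged sketch of such a pod (label key/value and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty-but-unmatched  # illustrative; assumed no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # assumed placeholder image
```

The test asserts on the event rather than the pod phase, which is why the log records "Considering event" instead of a pod status wait.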
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:03:45.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  2 12:03:45.704: INFO: namespace e2e-tests-kubectl-5rm8z
Feb  2 12:03:45.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5rm8z'
Feb  2 12:03:46.144: INFO: stderr: ""
Feb  2 12:03:46.144: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  2 12:03:47.285: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:47.285: INFO: Found 0 / 1
Feb  2 12:03:48.268: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:48.269: INFO: Found 0 / 1
Feb  2 12:03:49.329: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:49.329: INFO: Found 0 / 1
Feb  2 12:03:50.153: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:50.153: INFO: Found 0 / 1
Feb  2 12:03:51.167: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:51.168: INFO: Found 0 / 1
Feb  2 12:03:53.324: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:53.324: INFO: Found 0 / 1
Feb  2 12:03:54.241: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:54.241: INFO: Found 0 / 1
Feb  2 12:03:55.157: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:55.157: INFO: Found 0 / 1
Feb  2 12:03:56.160: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:56.160: INFO: Found 0 / 1
Feb  2 12:03:57.163: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:57.163: INFO: Found 1 / 1
Feb  2 12:03:57.163: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  2 12:03:57.171: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:03:57.171: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  2 12:03:57.171: INFO: wait on redis-master startup in e2e-tests-kubectl-5rm8z 
Feb  2 12:03:57.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-622jc redis-master --namespace=e2e-tests-kubectl-5rm8z'
Feb  2 12:03:57.452: INFO: stderr: ""
Feb  2 12:03:57.452: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Feb 12:03:55.036 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 12:03:55.037 # Server started, Redis version 3.2.12\n1:M 02 Feb 12:03:55.037 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 12:03:55.039 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  2 12:03:57.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-5rm8z'
Feb  2 12:03:57.669: INFO: stderr: ""
Feb  2 12:03:57.669: INFO: stdout: "service/rm2 exposed\n"
Feb  2 12:03:57.706: INFO: Service rm2 in namespace e2e-tests-kubectl-5rm8z found.
STEP: exposing service
Feb  2 12:03:59.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-5rm8z'
Feb  2 12:04:00.018: INFO: stderr: ""
Feb  2 12:04:00.018: INFO: stdout: "service/rm3 exposed\n"
Feb  2 12:04:00.113: INFO: Service rm3 in namespace e2e-tests-kubectl-5rm8z found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:04:02.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5rm8z" for this suite.
Feb  2 12:04:26.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:04:26.348: INFO: namespace: e2e-tests-kubectl-5rm8z, resource: bindings, ignored listing per whitelist
Feb  2 12:04:26.354: INFO: namespace e2e-tests-kubectl-5rm8z deletion completed in 24.212872199s

• [SLOW TEST:40.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
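Editor's note: the `kubectl expose` steps above can be reproduced by hand. Below is a hedged sketch of the Service that the first step, `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379`, generates. Port values and the namespace are taken from the log; the `app: redis` selector is an assumption based on the pod selector the test matched.

```yaml
# Sketch of the Service created by the test's first expose step.
# Assumes the redis-master RC carries the app: redis label matched in the log.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-5rm8z
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # service port from the expose command
    targetPort: 6379  # redis container port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, produces an analogous Service listening on port 2345 but forwarding to the same target port.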
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:04:26.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  2 12:04:26.752: INFO: Waiting up to 5m0s for pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-76v42" to be "success or failure"
Feb  2 12:04:26.777: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.536944ms
Feb  2 12:04:28.905: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152930222s
Feb  2 12:04:30.939: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18677743s
Feb  2 12:04:32.953: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200599682s
Feb  2 12:04:34.993: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240057349s
STEP: Saw pod success
Feb  2 12:04:34.993: INFO: Pod "pod-289f1af1-45b4-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:04:35.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-289f1af1-45b4-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 12:04:35.215: INFO: Waiting for pod pod-289f1af1-45b4-11ea-8b99-0242ac110005 to disappear
Feb  2 12:04:35.241: INFO: Pod pod-289f1af1-45b4-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:04:35.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-76v42" for this suite.
Feb  2 12:04:41.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:04:41.504: INFO: namespace: e2e-tests-emptydir-76v42, resource: bindings, ignored listing per whitelist
Feb  2 12:04:41.584: INFO: namespace e2e-tests-emptydir-76v42 deletion completed in 6.318909906s

• [SLOW TEST:15.229 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
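Editor's note: a rough sketch of the pod shape the `(non-root,0666,default)` EmptyDir test exercises — an emptyDir volume on the default (disk-backed) medium, written by a non-root user with 0666 file mode. The image, command, names, and UID here are illustrative, not taken from the log.

```yaml
# Hypothetical pod mirroring the test variant: non-root user, 0666 file mode,
# emptyDir on the default medium. All names and the image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  securityContext:
    runAsUser: 1001          # non-root, per the variant name
  containers:
  - name: test-container
    image: busybox           # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium (node disk)
```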
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:04:41.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-31a97ebc-45b4-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-31a97ebc-45b4-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:05:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vkq7k" for this suite.
Feb  2 12:06:18.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:06:18.264: INFO: namespace: e2e-tests-configmap-vkq7k, resource: bindings, ignored listing per whitelist
Feb  2 12:06:18.419: INFO: namespace e2e-tests-configmap-vkq7k deletion completed in 24.491329529s

• [SLOW TEST:96.835 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
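Editor's note: the long "waiting to observe update in volume" phase above (roughly 70 seconds) reflects the kubelet's periodic sync of ConfigMap-backed volumes — edits to the ConfigMap object are eventually projected into the mounted files, not instantly. A minimal sketch of that mount pattern, with illustrative names and data:

```yaml
# Hypothetical ConfigMap mounted as a volume; updating the ConfigMap is
# eventually reflected in the mounted files by the kubelet's sync loop.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: app
    image: busybox          # assumed image
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
```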
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:06:18.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  2 12:06:28.911: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6b84343e-45b4-11ea-8b99-0242ac110005,GenerateName:,Namespace:e2e-tests-events-l6vb4,SelfLink:/api/v1/namespaces/e2e-tests-events-l6vb4/pods/send-events-6b84343e-45b4-11ea-8b99-0242ac110005,UID:6b8566c7-45b4-11ea-a994-fa163e34d433,ResourceVersion:20304359,Generation:0,CreationTimestamp:2020-02-02 12:06:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 847761703,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dllx8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dllx8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-dllx8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00164cef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00164cf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:06:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:06:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:06:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:06:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-02 12:06:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-02 12:06:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d105ebe8ad22367e266d2a2afca43ae25dc24bc909aaa3e25f002e4477c27e90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}


STEP: checking for scheduler event about the pod
Feb  2 12:06:30.931: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  2 12:06:32.956: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:06:33.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-l6vb4" for this suite.
Feb  2 12:07:13.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:07:13.205: INFO: namespace: e2e-tests-events-l6vb4, resource: bindings, ignored listing per whitelist
Feb  2 12:07:13.284: INFO: namespace e2e-tests-events-l6vb4 deletion completed in 40.235209459s

• [SLOW TEST:54.865 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:07:13.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005
Feb  2 12:07:13.536: INFO: Pod name my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005: Found 0 pods out of 1
Feb  2 12:07:19.000: INFO: Pod name my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005: Found 1 pods out of 1
Feb  2 12:07:19.001: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005" are running
Feb  2 12:07:23.048: INFO: Pod "my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005-l5thd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 12:07:13 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 12:07:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 12:07:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 12:07:13 +0000 UTC Reason: Message:}])
Feb  2 12:07:23.049: INFO: Trying to dial the pod
Feb  2 12:07:28.122: INFO: Controller my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005: Got expected result from replica 1 [my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005-l5thd]: "my-hostname-basic-8c18f64f-45b4-11ea-8b99-0242ac110005-l5thd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:07:28.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-c562f" for this suite.
Feb  2 12:07:36.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:07:36.214: INFO: namespace: e2e-tests-replication-controller-c562f, resource: bindings, ignored listing per whitelist
Feb  2 12:07:36.383: INFO: namespace e2e-tests-replication-controller-c562f deletion completed in 8.248408111s

• [SLOW TEST:23.098 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
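Editor's note: a sketch of the kind of ReplicationController this test creates — a single replica of a pod that serves its own hostname over HTTP, which the test then dials to verify the response matches the pod name. The image matches the serve-hostname image recorded elsewhere in this log; the names and container port are illustrative assumptions.

```yaml
# Hypothetical RC mirroring the test: one replica of a hostname-serving pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo   # assumed name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376    # assumed serving port for this image
```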
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:07:36.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-cxbrq
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-cxbrq
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-cxbrq
Feb  2 12:07:37.536: INFO: Found 0 stateful pods, waiting for 1
Feb  2 12:07:47.554: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb  2 12:07:57.553: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  2 12:07:57.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 12:07:58.179: INFO: stderr: "I0202 12:07:57.794988     990 log.go:172] (0xc0006f2370) (0xc000716640) Create stream\nI0202 12:07:57.795392     990 log.go:172] (0xc0006f2370) (0xc000716640) Stream added, broadcasting: 1\nI0202 12:07:57.802658     990 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0202 12:07:57.802853     990 log.go:172] (0xc0006f2370) (0xc0005a6f00) Create stream\nI0202 12:07:57.802880     990 log.go:172] (0xc0006f2370) (0xc0005a6f00) Stream added, broadcasting: 3\nI0202 12:07:57.804968     990 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0202 12:07:57.805042     990 log.go:172] (0xc0006f2370) (0xc0007166e0) Create stream\nI0202 12:07:57.805058     990 log.go:172] (0xc0006f2370) (0xc0007166e0) Stream added, broadcasting: 5\nI0202 12:07:57.807520     990 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0202 12:07:58.035588     990 log.go:172] (0xc0006f2370) Data frame received for 3\nI0202 12:07:58.035667     990 log.go:172] (0xc0005a6f00) (3) Data frame handling\nI0202 12:07:58.035687     990 log.go:172] (0xc0005a6f00) (3) Data frame sent\nI0202 12:07:58.170862     990 log.go:172] (0xc0006f2370) (0xc0005a6f00) Stream removed, broadcasting: 3\nI0202 12:07:58.171060     990 log.go:172] (0xc0006f2370) Data frame received for 1\nI0202 12:07:58.171097     990 log.go:172] (0xc000716640) (1) Data frame handling\nI0202 12:07:58.171139     990 log.go:172] (0xc000716640) (1) Data frame sent\nI0202 12:07:58.171154     990 log.go:172] (0xc0006f2370) (0xc000716640) Stream removed, broadcasting: 1\nI0202 12:07:58.171186     990 log.go:172] (0xc0006f2370) (0xc0007166e0) Stream removed, broadcasting: 5\nI0202 12:07:58.171376     990 log.go:172] (0xc0006f2370) Go away received\nI0202 12:07:58.171783     990 log.go:172] (0xc0006f2370) (0xc000716640) Stream removed, broadcasting: 1\nI0202 12:07:58.171793     990 log.go:172] (0xc0006f2370) (0xc0005a6f00) Stream removed, broadcasting: 3\nI0202 12:07:58.171797     990 log.go:172] (0xc0006f2370) (0xc0007166e0) Stream removed, broadcasting: 5\n"
Feb  2 12:07:58.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 12:07:58.179: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 12:07:58.215: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  2 12:08:08.231: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 12:08:08.231: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 12:08:08.270: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:08.270: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:08.271: INFO: 
Feb  2 12:08:08.271: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  2 12:08:10.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988793374s
Feb  2 12:08:11.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.250364056s
Feb  2 12:08:12.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.104683633s
Feb  2 12:08:13.229: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.081307896s
Feb  2 12:08:14.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.030506976s
Feb  2 12:08:16.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.005716036s
Feb  2 12:08:17.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.087714066s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-cxbrq
Feb  2 12:08:18.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:08:19.527: INFO: stderr: "I0202 12:08:18.806755    1012 log.go:172] (0xc00013c6e0) (0xc000653220) Create stream\nI0202 12:08:18.807368    1012 log.go:172] (0xc00013c6e0) (0xc000653220) Stream added, broadcasting: 1\nI0202 12:08:18.820045    1012 log.go:172] (0xc00013c6e0) Reply frame received for 1\nI0202 12:08:18.820622    1012 log.go:172] (0xc00013c6e0) (0xc0006dc000) Create stream\nI0202 12:08:18.820739    1012 log.go:172] (0xc00013c6e0) (0xc0006dc000) Stream added, broadcasting: 3\nI0202 12:08:18.827091    1012 log.go:172] (0xc00013c6e0) Reply frame received for 3\nI0202 12:08:18.827231    1012 log.go:172] (0xc00013c6e0) (0xc0006532c0) Create stream\nI0202 12:08:18.827252    1012 log.go:172] (0xc00013c6e0) (0xc0006532c0) Stream added, broadcasting: 5\nI0202 12:08:18.834784    1012 log.go:172] (0xc00013c6e0) Reply frame received for 5\nI0202 12:08:19.294116    1012 log.go:172] (0xc00013c6e0) Data frame received for 3\nI0202 12:08:19.294212    1012 log.go:172] (0xc0006dc000) (3) Data frame handling\nI0202 12:08:19.294236    1012 log.go:172] (0xc0006dc000) (3) Data frame sent\nI0202 12:08:19.516178    1012 log.go:172] (0xc00013c6e0) Data frame received for 1\nI0202 12:08:19.516385    1012 log.go:172] (0xc00013c6e0) (0xc0006532c0) Stream removed, broadcasting: 5\nI0202 12:08:19.516457    1012 log.go:172] (0xc000653220) (1) Data frame handling\nI0202 12:08:19.516473    1012 log.go:172] (0xc000653220) (1) Data frame sent\nI0202 12:08:19.516497    1012 log.go:172] (0xc00013c6e0) (0xc0006dc000) Stream removed, broadcasting: 3\nI0202 12:08:19.516516    1012 log.go:172] (0xc00013c6e0) (0xc000653220) Stream removed, broadcasting: 1\nI0202 12:08:19.516530    1012 log.go:172] (0xc00013c6e0) Go away received\nI0202 12:08:19.517297    1012 log.go:172] (0xc00013c6e0) (0xc000653220) Stream removed, broadcasting: 1\nI0202 12:08:19.517309    1012 log.go:172] (0xc00013c6e0) (0xc0006dc000) Stream removed, broadcasting: 3\nI0202 12:08:19.517315    1012 log.go:172] (0xc00013c6e0) (0xc0006532c0) Stream removed, broadcasting: 5\n"
Feb  2 12:08:19.527: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 12:08:19.527: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 12:08:19.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:08:19.738: INFO: rc: 1
Feb  2 12:08:19.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00237c390 exit status 1   true [0xc0011b4520 0xc0011b4538 0xc0011b4550] [0xc0011b4520 0xc0011b4538 0xc0011b4550] [0xc0011b4530 0xc0011b4548] [0x935700 0x935700] 0xc002064360 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  2 12:08:29.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:08:30.331: INFO: stderr: "I0202 12:08:30.045912    1054 log.go:172] (0xc000138630) (0xc0006615e0) Create stream\nI0202 12:08:30.046112    1054 log.go:172] (0xc000138630) (0xc0006615e0) Stream added, broadcasting: 1\nI0202 12:08:30.051584    1054 log.go:172] (0xc000138630) Reply frame received for 1\nI0202 12:08:30.051639    1054 log.go:172] (0xc000138630) (0xc00027e000) Create stream\nI0202 12:08:30.051650    1054 log.go:172] (0xc000138630) (0xc00027e000) Stream added, broadcasting: 3\nI0202 12:08:30.052587    1054 log.go:172] (0xc000138630) Reply frame received for 3\nI0202 12:08:30.052617    1054 log.go:172] (0xc000138630) (0xc000661680) Create stream\nI0202 12:08:30.052627    1054 log.go:172] (0xc000138630) (0xc000661680) Stream added, broadcasting: 5\nI0202 12:08:30.053819    1054 log.go:172] (0xc000138630) Reply frame received for 5\nI0202 12:08:30.208080    1054 log.go:172] (0xc000138630) Data frame received for 3\nI0202 12:08:30.208232    1054 log.go:172] (0xc00027e000) (3) Data frame handling\nI0202 12:08:30.208261    1054 log.go:172] (0xc00027e000) (3) Data frame sent\nI0202 12:08:30.208284    1054 log.go:172] (0xc000138630) Data frame received for 5\nI0202 12:08:30.208317    1054 log.go:172] (0xc000661680) (5) Data frame handling\nI0202 12:08:30.208334    1054 log.go:172] (0xc000661680) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0202 12:08:30.318185    1054 log.go:172] (0xc000138630) Data frame received for 1\nI0202 12:08:30.318345    1054 log.go:172] (0xc000138630) (0xc000661680) Stream removed, broadcasting: 5\nI0202 12:08:30.318414    1054 log.go:172] (0xc0006615e0) (1) Data frame handling\nI0202 12:08:30.318436    1054 log.go:172] (0xc0006615e0) (1) Data frame sent\nI0202 12:08:30.318487    1054 log.go:172] (0xc000138630) (0xc00027e000) Stream removed, broadcasting: 3\nI0202 12:08:30.318516    1054 log.go:172] (0xc000138630) (0xc0006615e0) Stream removed, broadcasting: 1\nI0202 12:08:30.318574    1054 log.go:172] (0xc000138630) Go away received\nI0202 12:08:30.319917    1054 log.go:172] (0xc000138630) (0xc0006615e0) Stream removed, broadcasting: 1\nI0202 12:08:30.320226    1054 log.go:172] (0xc000138630) (0xc00027e000) Stream removed, broadcasting: 3\nI0202 12:08:30.320250    1054 log.go:172] (0xc000138630) (0xc000661680) Stream removed, broadcasting: 5\n"
Feb  2 12:08:30.331: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 12:08:30.331: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 12:08:30.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:08:31.094: INFO: stderr: "I0202 12:08:30.646763    1076 log.go:172] (0xc00013a6e0) (0xc0005a7400) Create stream\nI0202 12:08:30.647177    1076 log.go:172] (0xc00013a6e0) (0xc0005a7400) Stream added, broadcasting: 1\nI0202 12:08:30.653595    1076 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0202 12:08:30.653665    1076 log.go:172] (0xc00013a6e0) (0xc0003d4000) Create stream\nI0202 12:08:30.653696    1076 log.go:172] (0xc00013a6e0) (0xc0003d4000) Stream added, broadcasting: 3\nI0202 12:08:30.654641    1076 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0202 12:08:30.654662    1076 log.go:172] (0xc00013a6e0) (0xc0005a74a0) Create stream\nI0202 12:08:30.654667    1076 log.go:172] (0xc00013a6e0) (0xc0005a74a0) Stream added, broadcasting: 5\nI0202 12:08:30.657431    1076 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0202 12:08:30.913331    1076 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0202 12:08:30.913570    1076 log.go:172] (0xc0003d4000) (3) Data frame handling\nI0202 12:08:30.913614    1076 log.go:172] (0xc0003d4000) (3) Data frame sent\nI0202 12:08:30.913675    1076 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0202 12:08:30.913697    1076 log.go:172] (0xc0005a74a0) (5) Data frame handling\nI0202 12:08:30.913722    1076 log.go:172] (0xc0005a74a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0202 12:08:31.087242    1076 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0202 12:08:31.087348    1076 log.go:172] (0xc00013a6e0) (0xc0005a74a0) Stream removed, broadcasting: 5\nI0202 12:08:31.087483    1076 log.go:172] (0xc0005a7400) (1) Data frame handling\nI0202 12:08:31.087517    1076 log.go:172] (0xc0005a7400) (1) Data frame sent\nI0202 12:08:31.087538    1076 log.go:172] (0xc00013a6e0) (0xc0003d4000) Stream removed, broadcasting: 3\nI0202 12:08:31.087573    1076 log.go:172] (0xc00013a6e0) (0xc0005a7400) Stream removed, broadcasting: 1\nI0202 12:08:31.087584    1076 log.go:172] (0xc00013a6e0) Go away received\nI0202 12:08:31.088307    1076 log.go:172] (0xc00013a6e0) (0xc0005a7400) Stream removed, broadcasting: 1\nI0202 12:08:31.088317    1076 log.go:172] (0xc00013a6e0) (0xc0003d4000) Stream removed, broadcasting: 3\nI0202 12:08:31.088322    1076 log.go:172] (0xc00013a6e0) (0xc0005a74a0) Stream removed, broadcasting: 5\n"
Feb  2 12:08:31.094: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 12:08:31.094: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 12:08:31.103: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:08:31.103: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:08:31.103: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
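Editor's note: the `mv -v … || true` exec commands in this phase are how the conformance test toggles pod health. Moving nginx's index.html out of the web root makes the HTTP readiness probe start failing, and `|| true` masks mv's exit code so the `kubectl exec` itself reports success whether or not the file was still there. A minimal, standalone illustration of that exit-code masking (the path here is illustrative, not from the cluster):

```shell
# The "|| true" idiom: even when mv fails (e.g. the file was already
# moved away by a previous toggle), the compound command exits 0, so
# the test harness does not treat a repeated toggle as an error.
mv /nonexistent/index.html /tmp/ 2>/dev/null || true
echo "exit=$?"   # prints exit=0
```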
Feb  2 12:08:31.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 12:08:31.472: INFO: stderr: "I0202 12:08:31.248842    1099 log.go:172] (0xc0006e6370) (0xc000708640) Create stream\nI0202 12:08:31.249073    1099 log.go:172] (0xc0006e6370) (0xc000708640) Stream added, broadcasting: 1\nI0202 12:08:31.253835    1099 log.go:172] (0xc0006e6370) Reply frame received for 1\nI0202 12:08:31.253865    1099 log.go:172] (0xc0006e6370) (0xc0007cab40) Create stream\nI0202 12:08:31.253872    1099 log.go:172] (0xc0006e6370) (0xc0007cab40) Stream added, broadcasting: 3\nI0202 12:08:31.254718    1099 log.go:172] (0xc0006e6370) Reply frame received for 3\nI0202 12:08:31.254738    1099 log.go:172] (0xc0006e6370) (0xc0007086e0) Create stream\nI0202 12:08:31.254747    1099 log.go:172] (0xc0006e6370) (0xc0007086e0) Stream added, broadcasting: 5\nI0202 12:08:31.255702    1099 log.go:172] (0xc0006e6370) Reply frame received for 5\nI0202 12:08:31.358525    1099 log.go:172] (0xc0006e6370) Data frame received for 3\nI0202 12:08:31.358663    1099 log.go:172] (0xc0007cab40) (3) Data frame handling\nI0202 12:08:31.358703    1099 log.go:172] (0xc0007cab40) (3) Data frame sent\nI0202 12:08:31.462559    1099 log.go:172] (0xc0006e6370) Data frame received for 1\nI0202 12:08:31.462651    1099 log.go:172] (0xc000708640) (1) Data frame handling\nI0202 12:08:31.462676    1099 log.go:172] (0xc000708640) (1) Data frame sent\nI0202 12:08:31.463073    1099 log.go:172] (0xc0006e6370) (0xc0007086e0) Stream removed, broadcasting: 5\nI0202 12:08:31.463399    1099 log.go:172] (0xc0006e6370) (0xc000708640) Stream removed, broadcasting: 1\nI0202 12:08:31.463559    1099 log.go:172] (0xc0006e6370) (0xc0007cab40) Stream removed, broadcasting: 3\nI0202 12:08:31.463657    1099 log.go:172] (0xc0006e6370) Go away received\nI0202 12:08:31.464922    1099 log.go:172] (0xc0006e6370) (0xc000708640) Stream removed, broadcasting: 1\nI0202 12:08:31.464946    1099 log.go:172] (0xc0006e6370) (0xc0007cab40) Stream removed, broadcasting: 3\nI0202 12:08:31.464957    1099 log.go:172] 
(0xc0006e6370) (0xc0007086e0) Stream removed, broadcasting: 5\n"
Feb  2 12:08:31.472: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 12:08:31.472: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 12:08:31.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 12:08:31.995: INFO: stderr: "I0202 12:08:31.690005    1121 log.go:172] (0xc00013a630) (0xc00062f360) Create stream\nI0202 12:08:31.690199    1121 log.go:172] (0xc00013a630) (0xc00062f360) Stream added, broadcasting: 1\nI0202 12:08:31.693483    1121 log.go:172] (0xc00013a630) Reply frame received for 1\nI0202 12:08:31.693517    1121 log.go:172] (0xc00013a630) (0xc0004f8000) Create stream\nI0202 12:08:31.693528    1121 log.go:172] (0xc00013a630) (0xc0004f8000) Stream added, broadcasting: 3\nI0202 12:08:31.694386    1121 log.go:172] (0xc00013a630) Reply frame received for 3\nI0202 12:08:31.694421    1121 log.go:172] (0xc00013a630) (0xc00041e000) Create stream\nI0202 12:08:31.694435    1121 log.go:172] (0xc00013a630) (0xc00041e000) Stream added, broadcasting: 5\nI0202 12:08:31.695201    1121 log.go:172] (0xc00013a630) Reply frame received for 5\nI0202 12:08:31.821218    1121 log.go:172] (0xc00013a630) Data frame received for 3\nI0202 12:08:31.821662    1121 log.go:172] (0xc0004f8000) (3) Data frame handling\nI0202 12:08:31.821720    1121 log.go:172] (0xc0004f8000) (3) Data frame sent\nI0202 12:08:31.987059    1121 log.go:172] (0xc00013a630) Data frame received for 1\nI0202 12:08:31.987212    1121 log.go:172] (0xc00013a630) (0xc0004f8000) Stream removed, broadcasting: 3\nI0202 12:08:31.987314    1121 log.go:172] (0xc00062f360) (1) Data frame handling\nI0202 12:08:31.987333    1121 log.go:172] (0xc00062f360) (1) Data frame sent\nI0202 12:08:31.987363    1121 log.go:172] (0xc00013a630) (0xc00041e000) Stream removed, broadcasting: 5\nI0202 12:08:31.987390    1121 log.go:172] (0xc00013a630) (0xc00062f360) Stream removed, broadcasting: 1\nI0202 12:08:31.987414    1121 log.go:172] (0xc00013a630) Go away received\nI0202 12:08:31.987837    1121 log.go:172] (0xc00013a630) (0xc00062f360) Stream removed, broadcasting: 1\nI0202 12:08:31.987848    1121 log.go:172] (0xc00013a630) (0xc0004f8000) Stream removed, broadcasting: 3\nI0202 12:08:31.987857    1121 log.go:172] 
(0xc00013a630) (0xc00041e000) Stream removed, broadcasting: 5\n"
Feb  2 12:08:31.995: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 12:08:31.995: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 12:08:31.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 12:08:32.618: INFO: stderr: "I0202 12:08:32.276165    1144 log.go:172] (0xc0006ec2c0) (0xc0000232c0) Create stream\nI0202 12:08:32.276392    1144 log.go:172] (0xc0006ec2c0) (0xc0000232c0) Stream added, broadcasting: 1\nI0202 12:08:32.285108    1144 log.go:172] (0xc0006ec2c0) Reply frame received for 1\nI0202 12:08:32.285141    1144 log.go:172] (0xc0006ec2c0) (0xc00040c000) Create stream\nI0202 12:08:32.285149    1144 log.go:172] (0xc0006ec2c0) (0xc00040c000) Stream added, broadcasting: 3\nI0202 12:08:32.286381    1144 log.go:172] (0xc0006ec2c0) Reply frame received for 3\nI0202 12:08:32.286404    1144 log.go:172] (0xc0006ec2c0) (0xc0005a4000) Create stream\nI0202 12:08:32.286413    1144 log.go:172] (0xc0006ec2c0) (0xc0005a4000) Stream added, broadcasting: 5\nI0202 12:08:32.288136    1144 log.go:172] (0xc0006ec2c0) Reply frame received for 5\nI0202 12:08:32.407385    1144 log.go:172] (0xc0006ec2c0) Data frame received for 3\nI0202 12:08:32.407469    1144 log.go:172] (0xc00040c000) (3) Data frame handling\nI0202 12:08:32.407492    1144 log.go:172] (0xc00040c000) (3) Data frame sent\nI0202 12:08:32.609363    1144 log.go:172] (0xc0006ec2c0) Data frame received for 1\nI0202 12:08:32.609518    1144 log.go:172] (0xc0006ec2c0) (0xc00040c000) Stream removed, broadcasting: 3\nI0202 12:08:32.609593    1144 log.go:172] (0xc0000232c0) (1) Data frame handling\nI0202 12:08:32.609625    1144 log.go:172] (0xc0006ec2c0) (0xc0005a4000) Stream removed, broadcasting: 5\nI0202 12:08:32.609660    1144 log.go:172] (0xc0000232c0) (1) Data frame sent\nI0202 12:08:32.609683    1144 log.go:172] (0xc0006ec2c0) (0xc0000232c0) Stream removed, broadcasting: 1\nI0202 12:08:32.609708    1144 log.go:172] (0xc0006ec2c0) Go away received\nI0202 12:08:32.610254    1144 log.go:172] (0xc0006ec2c0) (0xc0000232c0) Stream removed, broadcasting: 1\nI0202 12:08:32.610275    1144 log.go:172] (0xc0006ec2c0) (0xc00040c000) Stream removed, broadcasting: 3\nI0202 12:08:32.610290    1144 log.go:172] 
(0xc0006ec2c0) (0xc0005a4000) Stream removed, broadcasting: 5\n"
Feb  2 12:08:32.618: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 12:08:32.618: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 12:08:32.618: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 12:08:32.632: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  2 12:08:42.695: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 12:08:42.695: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 12:08:42.695: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 12:08:42.765: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:42.765: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:42.765: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:42.765: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:42.765: INFO: 
Feb  2 12:08:42.765: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:44.433: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:44.433: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:44.433: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:44.433: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:44.433: INFO: 
Feb  2 12:08:44.433: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:45.794: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:45.794: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:45.794: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:45.794: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:45.794: INFO: 
Feb  2 12:08:45.794: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:46.831: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:46.832: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:46.832: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:46.832: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:46.832: INFO: 
Feb  2 12:08:46.832: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:48.085: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:48.085: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:48.085: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:48.085: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:48.085: INFO: 
Feb  2 12:08:48.085: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:49.220: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:49.220: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:49.220: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:49.220: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:49.220: INFO: 
Feb  2 12:08:49.220: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:50.233: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:50.233: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:50.233: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:50.233: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:50.233: INFO: 
Feb  2 12:08:50.233: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 12:08:51.242: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:51.242: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:51.242: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:08 +0000 UTC  }]
Feb  2 12:08:51.242: INFO: 
Feb  2 12:08:51.242: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  2 12:08:52.254: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  2 12:08:52.254: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:07:37 +0000 UTC  }]
Feb  2 12:08:52.254: INFO: 
Feb  2 12:08:52.254: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-cxbrq
Feb  2 12:08:53.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:08:53.448: INFO: rc: 1
Feb  2 12:08:53.448: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00104a060 exit status 1   true [0xc000d60e28 0xc000d60e40 0xc000d60ea0] [0xc000d60e28 0xc000d60e40 0xc000d60ea0] [0xc000d60e38 0xc000d60e80] [0x935700 0x935700] 0xc002355500 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

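The `rc: 1` / "Waiting 10s to retry failed RunHostCmd" lines that follow show the framework's retry behavior: each failed exec is retried after a 10s pause, and the error naturally shifts from `container not found ("nginx")` to `pods "ss-0" not found` as the scale-down deletes the pod. A rough sketch of that retry shape (hypothetical `retry` helper, not the framework's actual code; the real harness sleeps 10s between attempts):

```shell
# Retry loop shape used around RunHostCmd: a nonzero rc means
# wait, then run the same command again, up to n attempts.
retry() {
  n=$1; shift
  for i in $(seq "$n"); do
    "$@" && return 0   # success: stop retrying
    sleep 0            # the e2e framework waits 10s here
  done
  return 1             # all attempts failed
}

retry 3 true  && echo "ok"       # prints ok
retry 2 false || echo "gave up"  # prints gave up
```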
Feb  2 12:09:03.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:03.648: INFO: rc: 1
Feb  2 12:09:03.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00104a180 exit status 1   true [0xc000d60eb8 0xc000d60ed0 0xc000d60ef8] [0xc000d60eb8 0xc000d60ed0 0xc000d60ef8] [0xc000d60ec8 0xc000d60ee0] [0x935700 0x935700] 0xc0023557a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:09:13.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:13.776: INFO: rc: 1
Feb  2 12:09:13.777: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00104a600 exit status 1   true [0xc000d60f10 0xc000d60f48 0xc000d60fa0] [0xc000d60f10 0xc000d60f48 0xc000d60fa0] [0xc000d60f28 0xc000d60f88] [0x935700 0x935700] 0xc002355a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:09:23.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:23.966: INFO: rc: 1
Feb  2 12:09:23.967: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a30600 exit status 1   true [0xc001cbc180 0xc001cbc198 0xc001cbc1b0] [0xc001cbc180 0xc001cbc198 0xc001cbc1b0] [0xc001cbc190 0xc001cbc1a8] [0x935700 0x935700] 0xc001654360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:09:33.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:34.167: INFO: rc: 1
Feb  2 12:09:34.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001456120 exit status 1   true [0xc000e0c000 0xc000e0c018 0xc000e0c030] [0xc000e0c000 0xc000e0c018 0xc000e0c030] [0xc000e0c010 0xc000e0c028] [0x935700 0x935700] 0xc001072b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:09:44.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:44.280: INFO: rc: 1
Feb  2 12:09:44.280: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001de6540 exit status 1   true [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e090 0xc00056e220] [0x935700 0x935700] 0xc0022e8360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:09:54.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:09:54.440: INFO: rc: 1
Feb  2 12:09:54.441: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0022801b0 exit status 1   true [0xc001a82000 0xc001a82058 0xc001a82070] [0xc001a82000 0xc001a82058 0xc001a82070] [0xc001a82040 0xc001a82068] [0x935700 0x935700] 0xc002116300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:04.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:04.604: INFO: rc: 1
Feb  2 12:10:04.605: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde1e0 exit status 1   true [0xc0003cada8 0xc0003cafd8 0xc0003cb058] [0xc0003cada8 0xc0003cafd8 0xc0003cb058] [0xc0003cafc0 0xc0003cb038] [0x935700 0x935700] 0xc001c91da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:14.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:14.750: INFO: rc: 1
Feb  2 12:10:14.750: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde330 exit status 1   true [0xc0003cb070 0xc0003cb158 0xc0003cb1d0] [0xc0003cb070 0xc0003cb158 0xc0003cb1d0] [0xc0003cb0e0 0xc0003cb1b0] [0x935700 0x935700] 0xc001eb0180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:24.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:24.849: INFO: rc: 1
Feb  2 12:10:24.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001de66c0 exit status 1   true [0xc00056e270 0xc00056e328 0xc00056e390] [0xc00056e270 0xc00056e328 0xc00056e390] [0xc00056e2a8 0xc00056e378] [0x935700 0x935700] 0xc0022e8de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:34.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:34.965: INFO: rc: 1
Feb  2 12:10:34.966: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280300 exit status 1   true [0xc001a82078 0xc001a82090 0xc001a820b0] [0xc001a82078 0xc001a82090 0xc001a820b0] [0xc001a82088 0xc001a820a8] [0x935700 0x935700] 0xc0021165a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:44.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:45.136: INFO: rc: 1
Feb  2 12:10:45.136: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001456240 exit status 1   true [0xc000e0c038 0xc000e0c050 0xc000e0c068] [0xc000e0c038 0xc000e0c050 0xc000e0c068] [0xc000e0c048 0xc000e0c060] [0x935700 0x935700] 0xc001072ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:10:55.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:10:55.303: INFO: rc: 1
Feb  2 12:10:55.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde4e0 exit status 1   true [0xc0003cb1d8 0xc0003cb210 0xc0003cb258] [0xc0003cb1d8 0xc0003cb210 0xc0003cb258] [0xc0003cb200 0xc0003cb240] [0x935700 0x935700] 0xc001eb1260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:05.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:05.439: INFO: rc: 1
Feb  2 12:11:05.439: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280480 exit status 1   true [0xc001a820b8 0xc001a820d0 0xc001a820e8] [0xc001a820b8 0xc001a820d0 0xc001a820e8] [0xc001a820c8 0xc001a820e0] [0x935700 0x935700] 0xc0021171a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:15.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:15.615: INFO: rc: 1
Feb  2 12:11:15.615: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001456390 exit status 1   true [0xc000e0c070 0xc000e0c088 0xc000e0c0a0] [0xc000e0c070 0xc000e0c088 0xc000e0c0a0] [0xc000e0c080 0xc000e0c098] [0x935700 0x935700] 0xc001073260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:25.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:25.807: INFO: rc: 1
Feb  2 12:11:25.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde1b0 exit status 1   true [0xc0003cada8 0xc0003cafd8 0xc0003cb058] [0xc0003cada8 0xc0003cafd8 0xc0003cb058] [0xc0003cafc0 0xc0003cb038] [0x935700 0x935700] 0xc001c91da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:35.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:35.980: INFO: rc: 1
Feb  2 12:11:35.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001de6570 exit status 1   true [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e090 0xc00056e220] [0x935700 0x935700] 0xc001eb01e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:45.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:46.114: INFO: rc: 1
Feb  2 12:11:46.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde360 exit status 1   true [0xc0003cb070 0xc0003cb158 0xc0003cb1d0] [0xc0003cb070 0xc0003cb158 0xc0003cb1d0] [0xc0003cb0e0 0xc0003cb1b0] [0x935700 0x935700] 0xc0022e81e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:11:56.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:11:56.248: INFO: rc: 1
Feb  2 12:11:56.248: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001456150 exit status 1   true [0xc001a82000 0xc001a82058 0xc001a82070] [0xc001a82000 0xc001a82058 0xc001a82070] [0xc001a82040 0xc001a82068] [0x935700 0x935700] 0xc002116300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:06.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:06.406: INFO: rc: 1
Feb  2 12:12:06.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001cde510 exit status 1   true [0xc0003cb1d8 0xc0003cb210 0xc0003cb258] [0xc0003cb1d8 0xc0003cb210 0xc0003cb258] [0xc0003cb200 0xc0003cb240] [0x935700 0x935700] 0xc0022e8d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:16.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:16.626: INFO: rc: 1
Feb  2 12:12:16.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280240 exit status 1   true [0xc000e0c000 0xc000e0c018 0xc000e0c030] [0xc000e0c000 0xc000e0c018 0xc000e0c030] [0xc000e0c010 0xc000e0c028] [0x935700 0x935700] 0xc001072b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:26.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:26.755: INFO: rc: 1
Feb  2 12:12:26.755: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280390 exit status 1   true [0xc000e0c038 0xc000e0c050 0xc000e0c068] [0xc000e0c038 0xc000e0c050 0xc000e0c068] [0xc000e0c048 0xc000e0c060] [0x935700 0x935700] 0xc001072ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:36.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:36.933: INFO: rc: 1
Feb  2 12:12:36.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280570 exit status 1   true [0xc000e0c070 0xc000e0c088 0xc000e0c0a0] [0xc000e0c070 0xc000e0c088 0xc000e0c0a0] [0xc000e0c080 0xc000e0c098] [0x935700 0x935700] 0xc001073260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:46.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:47.083: INFO: rc: 1
Feb  2 12:12:47.084: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002280690 exit status 1   true [0xc000e0c0a8 0xc000e0c0c0 0xc000e0c0d8] [0xc000e0c0a8 0xc000e0c0c0 0xc000e0c0d8] [0xc000e0c0b8 0xc000e0c0d0] [0x935700 0x935700] 0xc001073560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:12:57.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:12:57.247: INFO: rc: 1
Feb  2 12:12:57.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0022807e0 exit status 1   true [0xc000e0c0e0 0xc000e0c100 0xc000e0c118] [0xc000e0c0e0 0xc000e0c100 0xc000e0c118] [0xc000e0c0f0 0xc000e0c110] [0x935700 0x935700] 0xc001073da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:07.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:07.419: INFO: rc: 1
Feb  2 12:13:07.421: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014562a0 exit status 1   true [0xc001a82078 0xc001a82090 0xc001a820b0] [0xc001a82078 0xc001a82090 0xc001a820b0] [0xc001a82088 0xc001a820a8] [0x935700 0x935700] 0xc0021165a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:17.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:17.594: INFO: rc: 1
Feb  2 12:13:17.594: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001456420 exit status 1   true [0xc001a820b8 0xc001a820d0 0xc001a820e8] [0xc001a820b8 0xc001a820d0 0xc001a820e8] [0xc001a820c8 0xc001a820e0] [0x935700 0x935700] 0xc0021171a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:27.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:27.758: INFO: rc: 1
Feb  2 12:13:27.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b601e0 exit status 1   true [0xc001cbc008 0xc001cbc020 0xc001cbc038] [0xc001cbc008 0xc001cbc020 0xc001cbc038] [0xc001cbc018 0xc001cbc030] [0x935700 0x935700] 0xc001e08240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:37.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:37.990: INFO: rc: 1
Feb  2 12:13:37.990: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001de6540 exit status 1   true [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e010 0xc00056e098 0xc00056e260] [0xc00056e090 0xc00056e220] [0x935700 0x935700] 0xc001c91da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:47.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:48.185: INFO: rc: 1
Feb  2 12:13:48.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001de66f0 exit status 1   true [0xc00056e270 0xc00056e328 0xc00056e390] [0xc00056e270 0xc00056e328 0xc00056e390] [0xc00056e2a8 0xc00056e378] [0x935700 0x935700] 0xc001eb0180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  2 12:13:58.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-cxbrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 12:13:58.349: INFO: rc: 1
Feb  2 12:13:58.349: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb  2 12:13:58.349: INFO: Scaling statefulset ss to 0
Feb  2 12:13:58.377: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  2 12:13:58.382: INFO: Deleting all statefulset in ns e2e-tests-statefulset-cxbrq
Feb  2 12:13:58.386: INFO: Scaling statefulset ss to 0
Feb  2 12:13:58.407: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 12:13:58.411: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:13:58.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-cxbrq" for this suite.
Feb  2 12:14:06.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:14:06.824: INFO: namespace: e2e-tests-statefulset-cxbrq, resource: bindings, ignored listing per whitelist
Feb  2 12:14:06.826: INFO: namespace e2e-tests-statefulset-cxbrq deletion completed in 8.290752748s

• [SLOW TEST:390.443 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
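Every retry above reports rc: 1 even though the remote command is wrapped in `|| true`. The guard runs inside the pod's shell, but the NotFound failure is kubectl's own (the pod no longer exists), so the nonzero status RunHostCmd sees is kubectl's, not the guarded `mv`'s. A minimal local sketch of that distinction (plain /bin/sh, no cluster assumed; the nonexistent wrapper binary stands in for the "pod not found" failure):

```shell
# "|| true" masks the inner command's failure: this exits 0 even though mv fails.
sh -c 'mv -v /tmp/no-such-file /usr/share/nginx/html/ 2>/dev/null || true'
echo "guarded rc: $?"

# But when the wrapper itself cannot run (here: a missing shell, standing in
# for kubectl's "pods ss-0 not found"), the guard never executes and the
# caller sees a nonzero exit status.
/bin/no-such-shell -c 'mv /a /b || true' 2>/dev/null
echo "wrapper rc: $?"
```

This is why the test keeps retrying every 10s: the guard can only protect against failures inside the pod, not against the pod being gone.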
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:14:06.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb  2 12:14:07.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vnm8'
Feb  2 12:14:09.142: INFO: stderr: ""
Feb  2 12:14:09.142: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb  2 12:14:10.156: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:10.156: INFO: Found 0 / 1
Feb  2 12:14:11.155: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:11.155: INFO: Found 0 / 1
Feb  2 12:14:12.172: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:12.173: INFO: Found 0 / 1
Feb  2 12:14:13.163: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:13.163: INFO: Found 0 / 1
Feb  2 12:14:14.427: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:14.427: INFO: Found 0 / 1
Feb  2 12:14:15.156: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:15.156: INFO: Found 0 / 1
Feb  2 12:14:16.160: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:16.160: INFO: Found 0 / 1
Feb  2 12:14:17.158: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:17.158: INFO: Found 1 / 1
Feb  2 12:14:17.158: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  2 12:14:17.164: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:14:17.164: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  2 12:14:17.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8'
Feb  2 12:14:17.425: INFO: stderr: ""
Feb  2 12:14:17.425: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Feb 12:14:16.158 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 12:14:16.159 # Server started, Redis version 3.2.12\n1:M 02 Feb 12:14:16.160 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 12:14:16.160 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  2 12:14:17.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8 --tail=1'
Feb  2 12:14:17.610: INFO: stderr: ""
Feb  2 12:14:17.610: INFO: stdout: "1:M 02 Feb 12:14:16.160 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  2 12:14:17.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8 --limit-bytes=1'
Feb  2 12:14:17.795: INFO: stderr: ""
Feb  2 12:14:17.795: INFO: stdout: " "
STEP: exposing timestamps
Feb  2 12:14:17.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8 --tail=1 --timestamps'
Feb  2 12:14:17.994: INFO: stderr: ""
Feb  2 12:14:17.994: INFO: stdout: "2020-02-02T12:14:16.160500117Z 1:M 02 Feb 12:14:16.160 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  2 12:14:20.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8 --since=1s'
Feb  2 12:14:20.710: INFO: stderr: ""
Feb  2 12:14:20.710: INFO: stdout: ""
Feb  2 12:14:20.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wx94h redis-master --namespace=e2e-tests-kubectl-8vnm8 --since=24h'
Feb  2 12:14:20.947: INFO: stderr: ""
Feb  2 12:14:20.947: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Feb 12:14:16.158 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 12:14:16.159 # Server started, Redis version 3.2.12\n1:M 02 Feb 12:14:16.160 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 12:14:16.160 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb  2 12:14:20.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8vnm8'
Feb  2 12:14:21.112: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:14:21.112: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  2 12:14:21.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8vnm8'
Feb  2 12:14:21.235: INFO: stderr: "No resources found.\n"
Feb  2 12:14:21.235: INFO: stdout: ""
Feb  2 12:14:21.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8vnm8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 12:14:21.390: INFO: stderr: ""
Feb  2 12:14:21.390: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:14:21.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8vnm8" for this suite.
Feb  2 12:14:45.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:14:45.643: INFO: namespace: e2e-tests-kubectl-8vnm8, resource: bindings, ignored listing per whitelist
Feb  2 12:14:45.652: INFO: namespace e2e-tests-kubectl-8vnm8 deletion completed in 24.249037564s

• [SLOW TEST:38.826 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
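The filtering steps above exercise kubectl's `--tail`, `--limit-bytes`, `--timestamps`, and `--since` flags against the redis-master container log. A rough local analogue of the first two using only coreutils (assumption: a throwaway file stands in for the pod log; no cluster needed):

```shell
# Fake pod log with three lines.
printf 'line1\nline2\nline3\n' > /tmp/fake-pod.log

tail -n 1 /tmp/fake-pod.log   # like "kubectl logs --tail=1": last line only
head -c 1 /tmp/fake-pod.log   # like "kubectl logs --limit-bytes=1": first byte only
```

Note the `--limit-bytes=1` step in the log returned a single space, matching the head-of-stream truncation shown here: the cutoff is byte-exact, not line-aligned.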
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:14:45.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb  2 12:14:45.857: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:14:45.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x268m" for this suite.
Feb  2 12:14:51.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:14:52.060: INFO: namespace: e2e-tests-kubectl-x268m, resource: bindings, ignored listing per whitelist
Feb  2 12:14:52.191: INFO: namespace e2e-tests-kubectl-x268m deletion completed in 6.233559389s

• [SLOW TEST:6.538 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:14:52.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  2 12:14:52.431: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  2 12:14:52.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:52.898: INFO: stderr: ""
Feb  2 12:14:52.898: INFO: stdout: "service/redis-slave created\n"
Feb  2 12:14:52.899: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  2 12:14:52.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:53.399: INFO: stderr: ""
Feb  2 12:14:53.399: INFO: stdout: "service/redis-master created\n"
Feb  2 12:14:53.400: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  2 12:14:53.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:53.756: INFO: stderr: ""
Feb  2 12:14:53.757: INFO: stdout: "service/frontend created\n"
Feb  2 12:14:53.757: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  2 12:14:53.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:54.076: INFO: stderr: ""
Feb  2 12:14:54.076: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  2 12:14:54.077: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  2 12:14:54.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:54.378: INFO: stderr: ""
Feb  2 12:14:54.378: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  2 12:14:54.379: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  2 12:14:54.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:14:55.370: INFO: stderr: ""
Feb  2 12:14:55.370: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  2 12:14:55.371: INFO: Waiting for all frontend pods to be Running.
Feb  2 12:15:20.423: INFO: Waiting for frontend to serve content.
Feb  2 12:15:20.767: INFO: Trying to add a new entry to the guestbook.
Feb  2 12:15:20.823: INFO: Verifying that added entry can be retrieved.
Feb  2 12:15:23.859: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb  2 12:15:28.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:29.247: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:29.247: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 12:15:29.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:29.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:29.594: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 12:15:29.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:29.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:29.931: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 12:15:29.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:30.092: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:30.093: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 12:15:30.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:30.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:30.437: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 12:15:30.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qmrh9'
Feb  2 12:15:30.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:15:30.901: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:15:30.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qmrh9" for this suite.
Feb  2 12:16:15.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:16:15.303: INFO: namespace: e2e-tests-kubectl-qmrh9, resource: bindings, ignored listing per whitelist
Feb  2 12:16:15.332: INFO: namespace e2e-tests-kubectl-qmrh9 deletion completed in 44.358746713s

• [SLOW TEST:83.141 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:16:15.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  2 12:16:15.467: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:16:38.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-kgdhs" for this suite.
Feb  2 12:17:02.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:17:03.141: INFO: namespace: e2e-tests-init-container-kgdhs, resource: bindings, ignored listing per whitelist
Feb  2 12:17:03.173: INFO: namespace e2e-tests-init-container-kgdhs deletion completed in 24.217607574s

• [SLOW TEST:47.841 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:17:03.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 12:17:03.381: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  2 12:17:08.398: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  2 12:17:14.417: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  2 12:17:14.549: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-frrpt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frrpt/deployments/test-cleanup-deployment,UID:f245bcd0-45b5-11ea-a994-fa163e34d433,ResourceVersion:20305616,Generation:1,CreationTimestamp:2020-02-02 12:17:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  2 12:17:14.559: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:17:14.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-frrpt" for this suite.
Feb  2 12:17:27.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:17:27.264: INFO: namespace: e2e-tests-deployment-frrpt, resource: bindings, ignored listing per whitelist
Feb  2 12:17:27.283: INFO: namespace e2e-tests-deployment-frrpt deletion completed in 12.485540617s

• [SLOW TEST:24.109 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:17:27.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  2 12:17:47.611: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  2 12:17:47.652: INFO: Pod pod-with-prestop-http-hook still exists
Feb  2 12:17:49.652: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  2 12:17:49.725: INFO: Pod pod-with-prestop-http-hook still exists
Feb  2 12:17:51.653: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  2 12:17:51.702: INFO: Pod pod-with-prestop-http-hook still exists
Feb  2 12:17:53.653: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  2 12:17:53.665: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:17:53.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cb8rc" for this suite.
Feb  2 12:18:17.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:18:17.932: INFO: namespace: e2e-tests-container-lifecycle-hook-cb8rc, resource: bindings, ignored listing per whitelist
Feb  2 12:18:18.037: INFO: namespace e2e-tests-container-lifecycle-hook-cb8rc deletion completed in 24.337040607s

• [SLOW TEST:50.754 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:18:18.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 12:18:18.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6t8w'
Feb  2 12:18:18.557: INFO: stderr: ""
Feb  2 12:18:18.557: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  2 12:18:28.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6t8w -o json'
Feb  2 12:18:28.772: INFO: stderr: ""
Feb  2 12:18:28.772: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-02T12:18:18Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-k6t8w\",\n        \"resourceVersion\": \"20305788\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-k6t8w/pods/e2e-test-nginx-pod\",\n        \"uid\": \"18790057-45b6-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-l42jd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-l42jd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-l42jd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T12:18:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T12:18:28Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T12:18:28Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T12:18:18Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://d8f8f3a00ea8d1b2ad7a07aa0ac65e44f94dc220e4dfaf32dfb96914cea105ea\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-02-02T12:18:26Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-02T12:18:18Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  2 12:18:28.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-k6t8w'
Feb  2 12:18:29.148: INFO: stderr: ""
Feb  2 12:18:29.148: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  2 12:18:29.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6t8w'
Feb  2 12:18:39.230: INFO: stderr: ""
Feb  2 12:18:39.231: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:18:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k6t8w" for this suite.
Feb  2 12:18:45.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:18:45.635: INFO: namespace: e2e-tests-kubectl-k6t8w, resource: bindings, ignored listing per whitelist
Feb  2 12:18:45.662: INFO: namespace e2e-tests-kubectl-k6t8w deletion completed in 6.414148817s

• [SLOW TEST:27.625 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:18:45.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  2 12:18:46.309: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-w8gtj" to be "success or failure"
Feb  2 12:18:46.318: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930192ms
Feb  2 12:18:48.335: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024965465s
Feb  2 12:18:50.347: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037779699s
Feb  2 12:18:52.487: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177303467s
Feb  2 12:18:55.094: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78446414s
Feb  2 12:18:57.122: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811940991s
Feb  2 12:18:59.145: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.835108593s
STEP: Saw pod success
Feb  2 12:18:59.145: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  2 12:18:59.157: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  2 12:18:59.700: INFO: Waiting for pod pod-host-path-test to disappear
Feb  2 12:18:59.865: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:18:59.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-w8gtj" for this suite.
Feb  2 12:19:06.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:19:06.210: INFO: namespace: e2e-tests-hostpath-w8gtj, resource: bindings, ignored listing per whitelist
Feb  2 12:19:06.270: INFO: namespace e2e-tests-hostpath-w8gtj deletion completed in 6.343396954s

• [SLOW TEST:20.608 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:19:06.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  2 12:19:06.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-swnv9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  2 12:19:15.800: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0202 12:19:14.752950    2414 log.go:172] (0xc00014c6e0) (0xc00086c5a0) Create stream\nI0202 12:19:14.753144    2414 log.go:172] (0xc00014c6e0) (0xc00086c5a0) Stream added, broadcasting: 1\nI0202 12:19:14.758616    2414 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0202 12:19:14.758746    2414 log.go:172] (0xc00014c6e0) (0xc00085eaa0) Create stream\nI0202 12:19:14.758758    2414 log.go:172] (0xc00014c6e0) (0xc00085eaa0) Stream added, broadcasting: 3\nI0202 12:19:14.759835    2414 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0202 12:19:14.759859    2414 log.go:172] (0xc00014c6e0) (0xc00086c640) Create stream\nI0202 12:19:14.759865    2414 log.go:172] (0xc00014c6e0) (0xc00086c640) Stream added, broadcasting: 5\nI0202 12:19:14.760681    2414 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0202 12:19:14.760701    2414 log.go:172] (0xc00014c6e0) (0xc0007d2000) Create stream\nI0202 12:19:14.760716    2414 log.go:172] (0xc00014c6e0) (0xc0007d2000) Stream added, broadcasting: 7\nI0202 12:19:14.761772    2414 log.go:172] (0xc00014c6e0) Reply frame received for 7\nI0202 12:19:14.762176    2414 log.go:172] (0xc00085eaa0) (3) Writing data frame\nI0202 12:19:14.762425    2414 log.go:172] (0xc00085eaa0) (3) Writing data frame\nI0202 12:19:14.769799    2414 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0202 12:19:14.769829    2414 log.go:172] (0xc00086c640) (5) Data frame handling\nI0202 12:19:14.769858    2414 log.go:172] (0xc00086c640) (5) Data frame sent\nI0202 12:19:14.773814    2414 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0202 12:19:14.773830    2414 log.go:172] (0xc00086c640) (5) Data frame handling\nI0202 12:19:14.773837    2414 log.go:172] (0xc00086c640) (5) Data frame 
sent\nI0202 12:19:15.747787    2414 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0202 12:19:15.747957    2414 log.go:172] (0xc00014c6e0) (0xc00085eaa0) Stream removed, broadcasting: 3\nI0202 12:19:15.748061    2414 log.go:172] (0xc00086c5a0) (1) Data frame handling\nI0202 12:19:15.748334    2414 log.go:172] (0xc00014c6e0) (0xc00086c640) Stream removed, broadcasting: 5\nI0202 12:19:15.748358    2414 log.go:172] (0xc00086c5a0) (1) Data frame sent\nI0202 12:19:15.748376    2414 log.go:172] (0xc00014c6e0) (0xc00086c5a0) Stream removed, broadcasting: 1\nI0202 12:19:15.748393    2414 log.go:172] (0xc00014c6e0) (0xc0007d2000) Stream removed, broadcasting: 7\nI0202 12:19:15.748419    2414 log.go:172] (0xc00014c6e0) Go away received\nI0202 12:19:15.749039    2414 log.go:172] (0xc00014c6e0) (0xc00086c5a0) Stream removed, broadcasting: 1\nI0202 12:19:15.749055    2414 log.go:172] (0xc00014c6e0) (0xc00085eaa0) Stream removed, broadcasting: 3\nI0202 12:19:15.749070    2414 log.go:172] (0xc00014c6e0) (0xc00086c640) Stream removed, broadcasting: 5\nI0202 12:19:15.749080    2414 log.go:172] (0xc00014c6e0) (0xc0007d2000) Stream removed, broadcasting: 7\n"
Feb  2 12:19:15.800: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:19:17.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-swnv9" for this suite.
Feb  2 12:19:24.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:19:24.546: INFO: namespace: e2e-tests-kubectl-swnv9, resource: bindings, ignored listing per whitelist
Feb  2 12:19:24.679: INFO: namespace e2e-tests-kubectl-swnv9 deletion completed in 6.842392111s

• [SLOW TEST:18.408 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:19:24.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3ffb65d8-45b6-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:19:24.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-6p7g5" to be "success or failure"
Feb  2 12:19:24.903: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.000534ms
Feb  2 12:19:26.932: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073200386s
Feb  2 12:19:29.664: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.805434845s
Feb  2 12:19:31.677: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818999939s
Feb  2 12:19:33.881: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.022520929s
STEP: Saw pod success
Feb  2 12:19:33.881: INFO: Pod "pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:19:34.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:19:34.436: INFO: Waiting for pod pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005 to disappear
Feb  2 12:19:34.464: INFO: Pod pod-configmaps-40002116-45b6-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:19:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6p7g5" for this suite.
Feb  2 12:19:40.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:19:40.704: INFO: namespace: e2e-tests-configmap-6p7g5, resource: bindings, ignored listing per whitelist
Feb  2 12:19:40.752: INFO: namespace e2e-tests-configmap-6p7g5 deletion completed in 6.269298911s

• [SLOW TEST:16.073 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:19:40.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0202 12:19:44.598441       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 12:19:44.598: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:19:44.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-r5hn4" for this suite.
Feb  2 12:19:51.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:19:51.682: INFO: namespace: e2e-tests-gc-r5hn4, resource: bindings, ignored listing per whitelist
Feb  2 12:19:51.744: INFO: namespace e2e-tests-gc-r5hn4 deletion completed in 6.757097216s

• [SLOW TEST:10.992 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:19:51.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:19:51.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-c56ck" to be "success or failure"
Feb  2 12:19:52.018: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.831467ms
Feb  2 12:19:54.049: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066289739s
Feb  2 12:19:56.060: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077605464s
Feb  2 12:19:58.081: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098487044s
Feb  2 12:20:00.158: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175565992s
Feb  2 12:20:02.693: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.710562661s
STEP: Saw pod success
Feb  2 12:20:02.694: INFO: Pod "downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:20:02.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:20:03.100: INFO: Waiting for pod downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005 to disappear
Feb  2 12:20:03.238: INFO: Pod downwardapi-volume-502b861f-45b6-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:20:03.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c56ck" for this suite.
Feb  2 12:20:09.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:20:09.319: INFO: namespace: e2e-tests-projected-c56ck, resource: bindings, ignored listing per whitelist
Feb  2 12:20:09.466: INFO: namespace e2e-tests-projected-c56ck deletion completed in 6.219913086s

• [SLOW TEST:17.722 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:20:09.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9h8mz
Feb  2 12:20:21.735: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9h8mz
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 12:20:21.743: INFO: Initial restart count of pod liveness-http is 0
Feb  2 12:20:44.001: INFO: Restart count of pod e2e-tests-container-probe-9h8mz/liveness-http is now 1 (22.257370746s elapsed)
Feb  2 12:21:02.247: INFO: Restart count of pod e2e-tests-container-probe-9h8mz/liveness-http is now 2 (40.504116299s elapsed)
Feb  2 12:21:22.446: INFO: Restart count of pod e2e-tests-container-probe-9h8mz/liveness-http is now 3 (1m0.703106157s elapsed)
Feb  2 12:21:42.732: INFO: Restart count of pod e2e-tests-container-probe-9h8mz/liveness-http is now 4 (1m20.988750907s elapsed)
Feb  2 12:22:52.114: INFO: Restart count of pod e2e-tests-container-probe-9h8mz/liveness-http is now 5 (2m30.370706753s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:22:52.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9h8mz" for this suite.
Feb  2 12:22:58.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:22:58.750: INFO: namespace: e2e-tests-container-probe-9h8mz, resource: bindings, ignored listing per whitelist
Feb  2 12:22:58.776: INFO: namespace e2e-tests-container-probe-9h8mz deletion completed in 6.474947646s

• [SLOW TEST:169.309 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:22:58.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  2 12:23:07.627: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bfa52591-45b6-11ea-8b99-0242ac110005"
Feb  2 12:23:07.627: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bfa52591-45b6-11ea-8b99-0242ac110005" in namespace "e2e-tests-pods-l6qzp" to be "terminated due to deadline exceeded"
Feb  2 12:23:07.645: INFO: Pod "pod-update-activedeadlineseconds-bfa52591-45b6-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 17.39114ms
Feb  2 12:23:09.661: INFO: Pod "pod-update-activedeadlineseconds-bfa52591-45b6-11ea-8b99-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033124788s
Feb  2 12:23:09.661: INFO: Pod "pod-update-activedeadlineseconds-bfa52591-45b6-11ea-8b99-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:23:09.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-l6qzp" for this suite.
Feb  2 12:23:16.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:23:16.043: INFO: namespace: e2e-tests-pods-l6qzp, resource: bindings, ignored listing per whitelist
Feb  2 12:23:16.226: INFO: namespace e2e-tests-pods-l6qzp deletion completed in 6.556494594s

• [SLOW TEST:17.449 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:23:16.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0202 12:23:29.765983       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 12:23:29.766: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:23:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-chdtc" for this suite.
Feb  2 12:23:51.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:23:52.040: INFO: namespace: e2e-tests-gc-chdtc, resource: bindings, ignored listing per whitelist
Feb  2 12:23:52.068: INFO: namespace e2e-tests-gc-chdtc deletion completed in 22.288851141s

• [SLOW TEST:35.842 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:23:52.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:23:52.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-sdmsd" to be "success or failure"
Feb  2 12:23:52.282: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.99064ms
Feb  2 12:23:54.291: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019842142s
Feb  2 12:23:56.309: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037537676s
Feb  2 12:23:58.539: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267186795s
Feb  2 12:24:00.584: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312609685s
Feb  2 12:24:02.610: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.338627751s
STEP: Saw pod success
Feb  2 12:24:02.610: INFO: Pod "downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:24:02.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:24:02.712: INFO: Waiting for pod downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005 to disappear
Feb  2 12:24:02.779: INFO: Pod downwardapi-volume-df6579d8-45b6-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:24:02.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sdmsd" for this suite.
Feb  2 12:24:08.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:24:08.946: INFO: namespace: e2e-tests-downward-api-sdmsd, resource: bindings, ignored listing per whitelist
Feb  2 12:24:09.002: INFO: namespace e2e-tests-downward-api-sdmsd deletion completed in 6.214981737s

• [SLOW TEST:16.933 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:24:09.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 12:24:09.296: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.026044ms)
Feb  2 12:24:09.310: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.955827ms)
Feb  2 12:24:09.326: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.637846ms)
Feb  2 12:24:09.333: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.939261ms)
Feb  2 12:24:09.340: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.766365ms)
Feb  2 12:24:09.350: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.560811ms)
Feb  2 12:24:09.401: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 50.433535ms)
Feb  2 12:24:09.410: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.17006ms)
Feb  2 12:24:09.418: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.378412ms)
Feb  2 12:24:09.423: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.347156ms)
Feb  2 12:24:09.430: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.635021ms)
Feb  2 12:24:09.438: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.68569ms)
Feb  2 12:24:09.446: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.641085ms)
Feb  2 12:24:09.452: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.915277ms)
Feb  2 12:24:09.458: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.080396ms)
Feb  2 12:24:09.470: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.394889ms)
Feb  2 12:24:09.479: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.562222ms)
Feb  2 12:24:09.485: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.947671ms)
Feb  2 12:24:09.492: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.478185ms)
Feb  2 12:24:09.497: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.533408ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:24:09.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5cl7n" for this suite.
Feb  2 12:24:15.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:24:15.591: INFO: namespace: e2e-tests-proxy-5cl7n, resource: bindings, ignored listing per whitelist
Feb  2 12:24:15.703: INFO: namespace e2e-tests-proxy-5cl7n deletion completed in 6.200411238s

• [SLOW TEST:6.701 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
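The proxy test above issues twenty GETs against the API server's node proxy subresource to fetch the node's log directory listing. A minimal sketch of how that request path is put together, assuming only the node name recorded in the log (the helper function name is hypothetical, not part of the e2e framework):

```python
# Sketch: building the node-proxy "logs" path the test above requests
# repeatedly. The node name is taken from the log output.

def node_proxy_logs_path(node_name: str, log_path: str = "") -> str:
    """Return the API server path for the node's proxied log directory."""
    return f"/api/v1/nodes/{node_name}/proxy/logs/{log_path}"

path = node_proxy_logs_path("hunter-server-hu5at5svl7ps")
print(path)  # /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/
```

Each `(200; …ms)` entry in the log is one round trip to this path, with the response body (the directory listing) echoed and truncated.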
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:24:15.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  2 12:24:15.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnn8d'
Feb  2 12:24:17.889: INFO: stderr: ""
Feb  2 12:24:17.889: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  2 12:24:19.595: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:19.595: INFO: Found 0 / 1
Feb  2 12:24:19.990: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:19.990: INFO: Found 0 / 1
Feb  2 12:24:20.930: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:20.930: INFO: Found 0 / 1
Feb  2 12:24:21.910: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:21.910: INFO: Found 0 / 1
Feb  2 12:24:23.590: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:23.590: INFO: Found 0 / 1
Feb  2 12:24:24.049: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:24.049: INFO: Found 0 / 1
Feb  2 12:24:25.750: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:25.750: INFO: Found 0 / 1
Feb  2 12:24:25.999: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:25.999: INFO: Found 0 / 1
Feb  2 12:24:26.924: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:26.924: INFO: Found 0 / 1
Feb  2 12:24:27.911: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:27.911: INFO: Found 0 / 1
Feb  2 12:24:28.911: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:28.912: INFO: Found 1 / 1
Feb  2 12:24:28.912: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  2 12:24:28.920: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:28.920: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  2 12:24:28.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5rxzz --namespace=e2e-tests-kubectl-rnn8d -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  2 12:24:29.095: INFO: stderr: ""
Feb  2 12:24:29.095: INFO: stdout: "pod/redis-master-5rxzz patched\n"
STEP: checking annotations
Feb  2 12:24:29.106: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 12:24:29.106: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:24:29.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rnn8d" for this suite.
Feb  2 12:24:45.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:24:45.316: INFO: namespace: e2e-tests-kubectl-rnn8d, resource: bindings, ignored listing per whitelist
Feb  2 12:24:45.453: INFO: namespace e2e-tests-kubectl-rnn8d deletion completed in 16.341132098s

• [SLOW TEST:29.750 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
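The `kubectl patch` invocation logged above sends a merge-patch body that adds a single annotation to the pod. A minimal sketch of constructing that exact payload (the dict literal mirrors the `-p` argument in the log; serialization details are an illustration, not the e2e framework's code):

```python
import json

# Sketch: the annotation patch body sent by `kubectl patch` in the test
# above. The annotation key/value ("x": "y") comes straight from the log.
patch = {"metadata": {"annotations": {"x": "y"}}}
body = json.dumps(patch, separators=(",", ":"))
print(body)  # {"metadata":{"annotations":{"x":"y"}}}
```

The test then re-lists pods matching `app=redis` and verifies the annotation is present on each one.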
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:24:45.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ff3d9219-45b6-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 12:24:45.712: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-mw4bd" to be "success or failure"
Feb  2 12:24:45.722: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.50921ms
Feb  2 12:24:47.743: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03081087s
Feb  2 12:24:49.751: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038803264s
Feb  2 12:24:51.932: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220329639s
Feb  2 12:24:54.131: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.419196201s
Feb  2 12:24:56.154: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.442081207s
STEP: Saw pod success
Feb  2 12:24:56.154: INFO: Pod "pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:24:56.169: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 12:24:57.582: INFO: Waiting for pod pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005 to disappear
Feb  2 12:24:57.684: INFO: Pod pod-projected-secrets-ff3eafc1-45b6-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:24:57.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mw4bd" for this suite.
Feb  2 12:25:03.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:25:03.919: INFO: namespace: e2e-tests-projected-mw4bd, resource: bindings, ignored listing per whitelist
Feb  2 12:25:03.984: INFO: namespace e2e-tests-projected-mw4bd deletion completed in 6.289307743s

• [SLOW TEST:18.531 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
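The projected-secret test above mounts a Secret into the pod through a `projected` volume and asserts the file contents from inside the container. A rough sketch of the volume shape involved, assuming the standard Kubernetes projected-volume structure (the volume name is hypothetical; the secret name is the one from the log):

```python
# Sketch (assumed API shape): a projected volume sourcing the secret
# created by the test above. Only the secret name is taken from the log.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [
            {
                "secret": {
                    "name": "projected-secret-test-ff3d9219-45b6-11ea-8b99-0242ac110005"
                }
            }
        ]
    },
}
print(volume["projected"]["sources"][0]["secret"]["name"])
```

The pod runs to `Succeeded` once the mounted file matches the expected secret data, which is the "success or failure" condition the log polls for.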
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:25:03.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-62cvt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 12:25:04.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 12:25:40.444: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-62cvt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:25:40.445: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:25:40.572695       9 log.go:172] (0xc001d9c0b0) (0xc001fbc280) Create stream
I0202 12:25:40.572839       9 log.go:172] (0xc001d9c0b0) (0xc001fbc280) Stream added, broadcasting: 1
I0202 12:25:40.588745       9 log.go:172] (0xc001d9c0b0) Reply frame received for 1
I0202 12:25:40.588824       9 log.go:172] (0xc001d9c0b0) (0xc001fbc320) Create stream
I0202 12:25:40.588842       9 log.go:172] (0xc001d9c0b0) (0xc001fbc320) Stream added, broadcasting: 3
I0202 12:25:40.591512       9 log.go:172] (0xc001d9c0b0) Reply frame received for 3
I0202 12:25:40.591544       9 log.go:172] (0xc001d9c0b0) (0xc000a12140) Create stream
I0202 12:25:40.591560       9 log.go:172] (0xc001d9c0b0) (0xc000a12140) Stream added, broadcasting: 5
I0202 12:25:40.594140       9 log.go:172] (0xc001d9c0b0) Reply frame received for 5
I0202 12:25:40.871708       9 log.go:172] (0xc001d9c0b0) Data frame received for 3
I0202 12:25:40.871760       9 log.go:172] (0xc001fbc320) (3) Data frame handling
I0202 12:25:40.871774       9 log.go:172] (0xc001fbc320) (3) Data frame sent
I0202 12:25:41.068586       9 log.go:172] (0xc001d9c0b0) Data frame received for 1
I0202 12:25:41.068675       9 log.go:172] (0xc001fbc280) (1) Data frame handling
I0202 12:25:41.068706       9 log.go:172] (0xc001fbc280) (1) Data frame sent
I0202 12:25:41.078333       9 log.go:172] (0xc001d9c0b0) (0xc001fbc280) Stream removed, broadcasting: 1
I0202 12:25:41.082323       9 log.go:172] (0xc001d9c0b0) (0xc001fbc320) Stream removed, broadcasting: 3
I0202 12:25:41.082464       9 log.go:172] (0xc001d9c0b0) (0xc000a12140) Stream removed, broadcasting: 5
I0202 12:25:41.082503       9 log.go:172] (0xc001d9c0b0) Go away received
I0202 12:25:41.082909       9 log.go:172] (0xc001d9c0b0) (0xc001fbc280) Stream removed, broadcasting: 1
I0202 12:25:41.082946       9 log.go:172] (0xc001d9c0b0) (0xc001fbc320) Stream removed, broadcasting: 3
I0202 12:25:41.082970       9 log.go:172] (0xc001d9c0b0) (0xc000a12140) Stream removed, broadcasting: 5
Feb  2 12:25:41.083: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:25:41.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-62cvt" for this suite.
Feb  2 12:26:05.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:26:05.403: INFO: namespace: e2e-tests-pod-network-test-62cvt, resource: bindings, ignored listing per whitelist
Feb  2 12:26:05.403: INFO: namespace e2e-tests-pod-network-test-62cvt deletion completed in 24.296134032s

• [SLOW TEST:61.418 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
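The intra-pod networking check above execs `curl` inside a host test container against a `/dial` endpoint, which in turn probes the target pod. A sketch of how that probe URL is assembled, using the pod IPs and port recorded in the log (the helper is hypothetical; the query parameters mirror the logged command):

```python
from urllib.parse import urlencode

# Sketch: the /dial probe URL issued from host-test-container-pod above.
# Proxy IP 10.32.0.5 and target IP 10.32.0.4 are the ones in the log.
def dial_url(proxy_ip, target_ip, port=8080, protocol="http", tries=1):
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

print(dial_url("10.32.0.5", "10.32.0.4"))
```

The empty `Waiting for endpoints: map[]` line at the end indicates every expected endpoint answered, so the test passes.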
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:26:05.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 12:26:05.845: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  2 12:26:10.973: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  2 12:26:15.028: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  2 12:26:17.044: INFO: Creating deployment "test-rollover-deployment"
Feb  2 12:26:17.075: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  2 12:26:19.097: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  2 12:26:19.115: INFO: Ensure that both replica sets have 1 created replica
Feb  2 12:26:19.131: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  2 12:26:19.153: INFO: Updating deployment test-rollover-deployment
Feb  2 12:26:19.153: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  2 12:26:21.193: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  2 12:26:21.333: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  2 12:26:21.357: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:21.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:24.070: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:24.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:25.376: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:25.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:27.386: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:27.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:29.428: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:29.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:31.377: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:31.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:33.382: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:33.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243192, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:35.424: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:35.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243192, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:37.380: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:37.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243192, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:39.391: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:39.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243192, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:41.393: INFO: all replica sets need to contain the pod-template-hash label
Feb  2 12:26:41.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243192, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243177, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 12:26:43.432: INFO: 
Feb  2 12:26:43.432: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  2 12:26:43.445: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-p65ln,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p65ln/deployments/test-rollover-deployment,UID:35b360d1-45b7-11ea-a994-fa163e34d433,ResourceVersion:20306949,Generation:2,CreationTimestamp:2020-02-02 12:26:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-02 12:26:17 +0000 UTC 2020-02-02 12:26:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-02 12:26:42 +0000 UTC 2020-02-02 12:26:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  2 12:26:43.452: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-p65ln,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p65ln/replicasets/test-rollover-deployment-5b8479fdb6,UID:36f51f3d-45b7-11ea-a994-fa163e34d433,ResourceVersion:20306940,Generation:2,CreationTimestamp:2020-02-02 12:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35b360d1-45b7-11ea-a994-fa163e34d433 0xc001088677 0xc001088678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  2 12:26:43.452: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  2 12:26:43.452: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-p65ln,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p65ln/replicasets/test-rollover-controller,UID:2ef02939-45b7-11ea-a994-fa163e34d433,ResourceVersion:20306948,Generation:2,CreationTimestamp:2020-02-02 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35b360d1-45b7-11ea-a994-fa163e34d433 0xc00103b547 0xc00103b548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 12:26:43.452: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-p65ln,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p65ln/replicasets/test-rollover-deployment-58494b7559,UID:35bd5811-45b7-11ea-a994-fa163e34d433,ResourceVersion:20306899,Generation:2,CreationTimestamp:2020-02-02 12:26:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 35b360d1-45b7-11ea-a994-fa163e34d433 0xc00103b607 0xc00103b608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 12:26:43.465: INFO: Pod "test-rollover-deployment-5b8479fdb6-lkrsn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lkrsn,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-p65ln,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p65ln/pods/test-rollover-deployment-5b8479fdb6-lkrsn,UID:37cd449d-45b7-11ea-a994-fa163e34d433,ResourceVersion:20306925,Generation:0,CreationTimestamp:2020-02-02 12:26:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 36f51f3d-45b7-11ea-a994-fa163e34d433 0xc001089ff7 0xc001089ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wdjrj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wdjrj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wdjrj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0008d4890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0008d48b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:26:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:26:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:26:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 12:26:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-02 12:26:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-02 12:26:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c2f1936d419d78b2c00fe0861e5f283c0158ce6b69608e0dd81ade05932eb79f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:26:43.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-p65ln" for this suite.
Feb  2 12:26:52.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:26:52.170: INFO: namespace: e2e-tests-deployment-p65ln, resource: bindings, ignored listing per whitelist
Feb  2 12:26:52.272: INFO: namespace e2e-tests-deployment-p65ln deletion completed in 8.800220886s

• [SLOW TEST:46.870 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:26:52.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-4b6473c5-45b7-11ea-8b99-0242ac110005
STEP: Creating secret with name s-test-opt-upd-4b6474a8-45b7-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4b6473c5-45b7-11ea-8b99-0242ac110005
STEP: Updating secret s-test-opt-upd-4b6474a8-45b7-11ea-8b99-0242ac110005
STEP: Creating secret with name s-test-opt-create-4b647589-45b7-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:27:11.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g7mvc" for this suite.
Feb  2 12:27:35.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:27:36.057: INFO: namespace: e2e-tests-projected-g7mvc, resource: bindings, ignored listing per whitelist
Feb  2 12:27:36.114: INFO: namespace e2e-tests-projected-g7mvc deletion completed in 24.23463945s

• [SLOW TEST:43.841 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:27:36.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:28:09.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-sntwr" for this suite.
Feb  2 12:28:33.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:28:33.705: INFO: namespace: e2e-tests-replication-controller-sntwr, resource: bindings, ignored listing per whitelist
Feb  2 12:28:33.736: INFO: namespace e2e-tests-replication-controller-sntwr deletion completed in 24.244633068s

• [SLOW TEST:57.622 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:28:33.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:28:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bg97v" for this suite.
Feb  2 12:29:38.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:29:38.465: INFO: namespace: e2e-tests-kubelet-test-bg97v, resource: bindings, ignored listing per whitelist
Feb  2 12:29:38.563: INFO: namespace e2e-tests-kubelet-test-bg97v deletion completed in 54.214454981s

• [SLOW TEST:64.828 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:29:38.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  2 12:29:39.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zmfxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-zmfxk/configmaps/e2e-watch-test-resource-version,UID:ae17472d-45b7-11ea-a994-fa163e34d433,ResourceVersion:20307293,Generation:0,CreationTimestamp:2020-02-02 12:29:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 12:29:39.102: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-zmfxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-zmfxk/configmaps/e2e-watch-test-resource-version,UID:ae17472d-45b7-11ea-a994-fa163e34d433,ResourceVersion:20307294,Generation:0,CreationTimestamp:2020-02-02 12:29:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:29:39.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zmfxk" for this suite.
Feb  2 12:29:45.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:29:45.232: INFO: namespace: e2e-tests-watch-zmfxk, resource: bindings, ignored listing per whitelist
Feb  2 12:29:45.346: INFO: namespace e2e-tests-watch-zmfxk deletion completed in 6.238736504s

• [SLOW TEST:6.781 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:29:45.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 12:29:45.605: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:29:46.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-rmjxx" for this suite.
Feb  2 12:29:52.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:29:53.065: INFO: namespace: e2e-tests-custom-resource-definition-rmjxx, resource: bindings, ignored listing per whitelist
Feb  2 12:29:53.363: INFO: namespace e2e-tests-custom-resource-definition-rmjxx deletion completed in 6.491295834s

• [SLOW TEST:8.016 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:29:53.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b6b6fd7e-45b7-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:29:53.575: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-djf7d" to be "success or failure"
Feb  2 12:29:53.582: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.410712ms
Feb  2 12:29:55.602: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027383217s
Feb  2 12:29:57.621: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046261761s
Feb  2 12:29:59.831: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255621346s
Feb  2 12:30:01.838: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263187984s
Feb  2 12:30:03.870: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.294862885s
STEP: Saw pod success
Feb  2 12:30:03.870: INFO: Pod "pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:30:03.881: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 12:30:04.346: INFO: Waiting for pod pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005 to disappear
Feb  2 12:30:04.654: INFO: Pod pod-projected-configmaps-b6b7c67c-45b7-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:30:04.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-djf7d" for this suite.
Feb  2 12:30:10.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:30:11.106: INFO: namespace: e2e-tests-projected-djf7d, resource: bindings, ignored listing per whitelist
Feb  2 12:30:11.111: INFO: namespace e2e-tests-projected-djf7d deletion completed in 6.432243785s

• [SLOW TEST:17.748 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
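The per-poll status lines in the block above (`Pod "...": Phase="Pending", ... Elapsed: 7.410712ms`) share a fixed shape across every "success or failure" wait in this run. As a purely illustrative aid for post-processing such a log, a hypothetical helper (not part of the e2e framework) that extracts the final phase and elapsed time from those lines might look like:

```python
import re

# Matches the e2e framework's poll lines, e.g.:
#   Feb  2 12:29:53.582: INFO: Pod "pod-x": Phase="Pending", Reason="", readiness=false. Elapsed: 7.410712ms
LINE_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>\w+)".*Elapsed: (?P<elapsed>[\d.]+)(?P<unit>ms|s)'
)

def final_phase(lines):
    """Return (phase, elapsed_seconds) from the last matching poll line, or None."""
    last = None
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            secs = float(m.group("elapsed"))
            if m.group("unit") == "ms":
                secs /= 1000.0  # normalize milliseconds to seconds
            last = (m.group("phase"), secs)
    return last
```

Fed the poll lines from the Projected configMap test above, it would report the pod reaching `Succeeded` after roughly 10.29 seconds. The pod name, regex, and function name here are assumptions for illustration only.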
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:30:11.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  2 12:30:21.148: INFO: Successfully updated pod "annotationupdatec15d9d57-45b7-11ea-8b99-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:30:23.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hq8ln" for this suite.
Feb  2 12:30:47.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:30:47.586: INFO: namespace: e2e-tests-projected-hq8ln, resource: bindings, ignored listing per whitelist
Feb  2 12:30:47.614: INFO: namespace e2e-tests-projected-hq8ln deletion completed in 24.248422409s

• [SLOW TEST:36.502 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:30:47.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d72396c8-45b7-11ea-8b99-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-d723976a-45b7-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d72396c8-45b7-11ea-8b99-0242ac110005
STEP: Updating configmap cm-test-opt-upd-d723976a-45b7-11ea-8b99-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-d723979d-45b7-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:31:06.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9m4pt" for this suite.
Feb  2 12:31:30.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:31:30.606: INFO: namespace: e2e-tests-configmap-9m4pt, resource: bindings, ignored listing per whitelist
Feb  2 12:31:30.834: INFO: namespace e2e-tests-configmap-9m4pt deletion completed in 24.430059913s

• [SLOW TEST:43.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:31:30.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:31:31.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-xqbpq" to be "success or failure"
Feb  2 12:31:31.129: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.185111ms
Feb  2 12:31:33.242: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144833857s
Feb  2 12:31:35.261: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163776515s
Feb  2 12:31:37.473: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375259943s
Feb  2 12:31:39.513: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41518133s
Feb  2 12:31:41.526: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.428767931s
STEP: Saw pod success
Feb  2 12:31:41.527: INFO: Pod "downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:31:41.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:31:41.772: INFO: Waiting for pod downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005 to disappear
Feb  2 12:31:41.936: INFO: Pod downwardapi-volume-f0e123d6-45b7-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:31:41.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xqbpq" for this suite.
Feb  2 12:31:48.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:31:48.785: INFO: namespace: e2e-tests-downward-api-xqbpq, resource: bindings, ignored listing per whitelist
Feb  2 12:31:48.929: INFO: namespace e2e-tests-downward-api-xqbpq deletion completed in 6.972624052s

• [SLOW TEST:18.095 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
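The Downward API test above reads the container's memory limit out of a downwardAPI volume file, where it appears as a plain byte count. As a hedged sketch (not the e2e framework's or Kubernetes' actual `resource.Quantity` code), the mapping from a manifest quantity like `64Mi` to that byte count looks like:

```python
# Hedged sketch: how a binary-suffixed memory quantity (e.g. "64Mi") reduces
# to the plain byte count exposed in a downward API volume file. Simplified
# stand-in for Kubernetes' resource.Quantity handling, illustrative only.
_BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def memory_limit_bytes(quantity: str) -> int:
    """Convert a binary-suffixed memory quantity to bytes."""
    for suffix, factor in _BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)  # no suffix: already a plain byte count

print(memory_limit_bytes("64Mi"))  # -> 67108864
```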
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:31:48.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  2 12:31:49.164: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  2 12:31:49.172: INFO: Waiting for terminating namespaces to be deleted...
Feb  2 12:31:49.175: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  2 12:31:49.190: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 12:31:49.190: INFO: 	Container coredns ready: true, restart count 0
Feb  2 12:31:49.190: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  2 12:31:49.190: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 12:31:49.190: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:31:49.190: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  2 12:31:49.190: INFO: 	Container weave ready: true, restart count 0
Feb  2 12:31:49.190: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 12:31:49.190: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  2 12:31:49.190: INFO: 	Container coredns ready: true, restart count 0
Feb  2 12:31:49.190: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:31:49.190: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  2 12:31:49.190: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  2 12:31:49.303: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbbd3d0a-45b7-11ea-8b99-0242ac110005.15ef95ceb9b4e82e], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-mk42f/filler-pod-fbbd3d0a-45b7-11ea-8b99-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbbd3d0a-45b7-11ea-8b99-0242ac110005.15ef95cfccfa8967], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbbd3d0a-45b7-11ea-8b99-0242ac110005.15ef95d035ebf822], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fbbd3d0a-45b7-11ea-8b99-0242ac110005.15ef95d069a07654], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef95d10fecdd2d], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:32:00.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mk42f" for this suite.
Feb  2 12:32:10.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:32:10.774: INFO: namespace: e2e-tests-sched-pred-mk42f, resource: bindings, ignored listing per whitelist
Feb  2 12:32:10.801: INFO: namespace e2e-tests-sched-pred-mk42f deletion completed in 10.166387754s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:21.872 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
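The scheduler-predicates test above sums the CPU requests already on the node (the `requesting resource cpu=...` lines), fills the remaining capacity with filler pods, and then expects one more pod to fail with `Insufficient cpu`. The fit check it exercises can be sketched in millicores; the 4000m allocatable figure below is an illustrative assumption, not a value from this log:

```python
# Hedged sketch of the CPU fit predicate: a new pod fits only if its request
# plus the requests already on the node stays within allocatable CPU (all in
# millicores). Not the scheduler's actual implementation.
def pod_fits_cpu(allocatable_mcpu: int, existing_requests_mcpu: list,
                 new_request_mcpu: int) -> bool:
    return sum(existing_requests_mcpu) + new_request_mcpu <= allocatable_mcpu

# Requests seen in the log above: 100+100+0+250+200+0+100+20 = 770m used.
existing = [100, 100, 0, 250, 200, 0, 100, 20]
print(pod_fits_cpu(4000, existing, 3000))  # True: pod fits
print(pod_fits_cpu(4000, existing, 3500))  # False: "Insufficient cpu"
```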
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:32:10.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  2 12:32:11.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rc78r,SelfLink:/api/v1/namespaces/e2e-tests-watch-rc78r/configmaps/e2e-watch-test-watch-closed,UID:08cef3df-45b8-11ea-a994-fa163e34d433,ResourceVersion:20307661,Generation:0,CreationTimestamp:2020-02-02 12:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  2 12:32:11.276: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rc78r,SelfLink:/api/v1/namespaces/e2e-tests-watch-rc78r/configmaps/e2e-watch-test-watch-closed,UID:08cef3df-45b8-11ea-a994-fa163e34d433,ResourceVersion:20307662,Generation:0,CreationTimestamp:2020-02-02 12:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  2 12:32:11.320: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rc78r,SelfLink:/api/v1/namespaces/e2e-tests-watch-rc78r/configmaps/e2e-watch-test-watch-closed,UID:08cef3df-45b8-11ea-a994-fa163e34d433,ResourceVersion:20307663,Generation:0,CreationTimestamp:2020-02-02 12:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 12:32:11.321: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-rc78r,SelfLink:/api/v1/namespaces/e2e-tests-watch-rc78r/configmaps/e2e-watch-test-watch-closed,UID:08cef3df-45b8-11ea-a994-fa163e34d433,ResourceVersion:20307664,Generation:0,CreationTimestamp:2020-02-02 12:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:32:11.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rc78r" for this suite.
Feb  2 12:32:17.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:32:17.525: INFO: namespace: e2e-tests-watch-rc78r, resource: bindings, ignored listing per whitelist
Feb  2 12:32:17.638: INFO: namespace e2e-tests-watch-rc78r deletion completed in 6.255416254s

• [SLOW TEST:6.837 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
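The watch test above closes its first watch after ResourceVersion 20307662 and restarts from that version, then sees only the MODIFIED (20307663) and DELETED (20307664) events. A minimal model of that resume semantic (not client-go, just the filtering rule):

```python
# Hedged sketch of "restart watching from the last resource version": a watch
# started at resourceVersion N replays only events newer than N, in order.
# Simplified model; real watches stream from the API server.
def watch_from(events, last_resource_version):
    """Return (type, resourceVersion) events newer than last_resource_version."""
    return [e for e in events if e[1] > last_resource_version]

# Events from the log: ADDED 20307661, MODIFIED 20307662 (watch closed here),
# then MODIFIED 20307663 and DELETED 20307664 while the watch was closed.
events = [("ADDED", 20307661), ("MODIFIED", 20307662),
          ("MODIFIED", 20307663), ("DELETED", 20307664)]
print(watch_from(events, 20307662))  # -> [('MODIFIED', 20307663), ('DELETED', 20307664)]
```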
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:32:17.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-cj2wk/configmap-test-0cb48f9d-45b8-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:32:17.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-cj2wk" to be "success or failure"
Feb  2 12:32:17.929: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.321656ms
Feb  2 12:32:19.945: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036702386s
Feb  2 12:32:21.974: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065841065s
Feb  2 12:32:23.995: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086999119s
Feb  2 12:32:26.008: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099597312s
Feb  2 12:32:28.130: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221765572s
STEP: Saw pod success
Feb  2 12:32:28.130: INFO: Pod "pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:32:28.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005 container env-test: 
STEP: delete the pod
Feb  2 12:32:28.496: INFO: Waiting for pod pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005 to disappear
Feb  2 12:32:28.511: INFO: Pod pod-configmaps-0cb561a5-45b8-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:32:28.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cj2wk" for this suite.
Feb  2 12:32:34.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:32:34.736: INFO: namespace: e2e-tests-configmap-cj2wk, resource: bindings, ignored listing per whitelist
Feb  2 12:32:34.739: INFO: namespace e2e-tests-configmap-cj2wk deletion completed in 6.214167807s

• [SLOW TEST:17.101 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
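The ConfigMap-via-environment test above injects ConfigMap data into a container's environment. As a hedged sketch of the `envFrom` merge rule (key names below are made up, and this is a simplification of kubelet behavior): every ConfigMap data key becomes a variable, with explicitly listed `env` entries winning on conflict:

```python
# Hedged sketch of consuming a ConfigMap via the environment: envFrom turns
# each data key into an env var; explicit env entries take precedence.
# Illustrative model only.
def build_env(configmap_data: dict, explicit_env: dict) -> dict:
    env = dict(configmap_data)  # envFrom: every ConfigMap key becomes a var
    env.update(explicit_env)    # explicit env entries win on conflict
    return env

cm = {"DATA_1": "value-1", "DATA_2": "value-2"}
print(build_env(cm, {"DATA_1": "override"}))
```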
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:32:34.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  2 12:32:34.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  2 12:32:35.081: INFO: stderr: ""
Feb  2 12:32:35.081: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:32:35.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-67rk6" for this suite.
Feb  2 12:32:41.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:32:41.178: INFO: namespace: e2e-tests-kubectl-67rk6, resource: bindings, ignored listing per whitelist
Feb  2 12:32:41.287: INFO: namespace e2e-tests-kubectl-67rk6 deletion completed in 6.193587344s

• [SLOW TEST:6.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
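The `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m` and friends). A validation like this test's has to look through those codes before asserting on the text; a small sketch of the stripping step:

```python
import re

# Hedged sketch: strip ANSI SGR color codes (as seen in the cluster-info
# stdout above) so the plain text can be asserted on.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    return ANSI_RE.sub("", s)

out = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m")
print(strip_ansi(out))  # -> Kubernetes master is running at https://172.24.4.212:6443
```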
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:32:41.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-tzp9l in namespace e2e-tests-proxy-hsl96
I0202 12:32:41.631603       9 runners.go:184] Created replication controller with name: proxy-service-tzp9l, namespace: e2e-tests-proxy-hsl96, replica count: 1
I0202 12:32:42.682486       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:43.683078       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:44.683466       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:45.683957       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:46.684314       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:47.684722       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:48.685014       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:49.685508       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 12:32:50.686419       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0202 12:32:51.687046       9 runners.go:184] proxy-service-tzp9l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  2 12:32:51.700: INFO: setup took 10.180688847s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  2 12:32:51.744: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hsl96/pods/http:proxy-service-tzp9l-6gx55:1080/proxy/: [HTML proxy response bodies and the remainder of the proxy test output lost to extraction]
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-2b62d5b4-45b8-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:33:09.585: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-72br7" to be "success or failure"
Feb  2 12:33:10.075: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 489.854665ms
Feb  2 12:33:12.100: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51580257s
Feb  2 12:33:14.110: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525729841s
Feb  2 12:33:16.117: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532440336s
Feb  2 12:33:18.148: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563136232s
Feb  2 12:33:20.164: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.579640553s
STEP: Saw pod success
Feb  2 12:33:20.164: INFO: Pod "pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:33:20.170: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:33:20.328: INFO: Waiting for pod pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005 to disappear
Feb  2 12:33:20.343: INFO: Pod pod-configmaps-2b63c4d5-45b8-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:33:20.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-72br7" for this suite.
Feb  2 12:33:26.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:33:26.717: INFO: namespace: e2e-tests-configmap-72br7, resource: bindings, ignored listing per whitelist
Feb  2 12:33:26.759: INFO: namespace e2e-tests-configmap-72br7 deletion completed in 6.398723317s

• [SLOW TEST:17.673 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
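The repeated `Waiting up to 5m0s for pod ... Phase="Pending" ... Elapsed:` lines throughout this log all follow the same poll-until-condition pattern. A hedged sketch of that loop (simplified; not the e2e framework's `wait.Poll` implementation):

```python
import time

# Hedged sketch of the poll loop behind the repeated "Waiting up to 5m0s for
# pod ..." lines: re-check a condition at an interval until it holds or the
# timeout elapses, with one final check at the deadline.
def wait_for(condition, timeout_s: float, interval_s: float = 0.01) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return condition()  # last chance at the deadline

# Simulated pod that reaches "Succeeded" on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for(lambda: next(phases, "Succeeded") == "Succeeded", timeout_s=1.0))
```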
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:33:26.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  2 12:33:38.119: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:33:39.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-q4qlp" for this suite.
Feb  2 12:34:07.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:34:08.203: INFO: namespace: e2e-tests-replicaset-q4qlp, resource: bindings, ignored listing per whitelist
Feb  2 12:34:08.267: INFO: namespace e2e-tests-replicaset-q4qlp deletion completed in 29.035362838s

• [SLOW TEST:41.506 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
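The ReplicaSet test above hinges on label-selector matching: a pod whose labels match the selector is adopted, and editing the label (the "matched label of one of its pods change" step) releases it. A minimal model of that rule, not the actual controller code:

```python
# Hedged sketch of adopt/release: a ReplicaSet claims a pod while the pod's
# labels satisfy every key/value in the selector, and releases it once they
# stop matching. Illustrative model only.
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}
pod_labels = {"name": "pod-adoption-release"}
print(selector_matches(selector, pod_labels))  # matching labels: adopted

pod_labels["name"] = "pod-adoption-release-changed"
print(selector_matches(selector, pod_labels))  # label changed: released
```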
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:34:08.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4ecc5028-45b8-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 12:34:08.765: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-p7rq8" to be "success or failure"
Feb  2 12:34:08.838: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.573091ms
Feb  2 12:34:10.851: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086609998s
Feb  2 12:34:12.872: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107111284s
Feb  2 12:34:14.958: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193493212s
Feb  2 12:34:16.996: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231166603s
Feb  2 12:34:19.007: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242144501s
STEP: Saw pod success
Feb  2 12:34:19.007: INFO: Pod "pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:34:19.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 12:34:19.929: INFO: Waiting for pod pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005 to disappear
Feb  2 12:34:20.162: INFO: Pod pod-projected-secrets-4eda2345-45b8-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:34:20.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p7rq8" for this suite.
Feb  2 12:34:26.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:34:26.316: INFO: namespace: e2e-tests-projected-p7rq8, resource: bindings, ignored listing per whitelist
Feb  2 12:34:26.482: INFO: namespace e2e-tests-projected-p7rq8 deletion completed in 6.307384664s

• [SLOW TEST:18.215 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
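The projected-secret test above sets `defaultMode` on the volume. In manifests that field is a plain integer whose value is read as an octal file mode; for instance the common decimal 420 is permission 0644 (`rw-r--r--`) on each projected file. A small sketch of that decimal/octal/permission-string mapping:

```python
import stat

# Hedged sketch: defaultMode is an integer file mode; decimal 420 in a
# manifest is octal 0644, i.e. rw-r--r-- on each projected file. The helper
# just renders the permission bits.
def mode_string(mode: int) -> str:
    return stat.filemode(mode)[1:]  # drop the leading file-type character

print(oct(420))            # -> 0o644
print(mode_string(0o644))  # -> rw-r--r--
```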
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:34:26.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 12:34:26.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7zplg'
Feb  2 12:34:28.934: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 12:34:28.934: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb  2 12:34:31.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7zplg'
Feb  2 12:34:31.994: INFO: stderr: ""
Feb  2 12:34:31.994: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:34:31.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7zplg" for this suite.
Feb  2 12:34:46.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:34:46.171: INFO: namespace: e2e-tests-kubectl-7zplg, resource: bindings, ignored listing per whitelist
Feb  2 12:34:46.250: INFO: namespace e2e-tests-kubectl-7zplg deletion completed in 14.246455274s

• [SLOW TEST:19.768 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
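Editor's note: the stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. A declarative equivalent of what the test ran might look like the manifest below (a sketch for illustration only; the e2e test itself shells out to kubectl, and the `run:` label convention is what the deprecated generator applied):

```yaml
# Rough declarative equivalent of:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # label convention used by the old generator
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```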
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:34:46.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb  2 12:34:56.674: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6569044b-45b8-11ea-8b99-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-k5g9v", SelfLink:"/api/v1/namespaces/e2e-tests-pods-k5g9v/pods/pod-submit-remove-6569044b-45b8-11ea-8b99-0242ac110005", UID:"656b69af-45b8-11ea-a994-fa163e34d433", ResourceVersion:"20308082", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716243686, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"590179020"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mb72d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002986740), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mb72d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002777478), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002450240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027774b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0027774d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027774d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027774dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243686, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243695, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243695, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716243686, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002a30d80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002a30e00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://29abd47da3f71439fafa0084dd719e0161ca1046cfa8d5f0991c47b020c223a2"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:35:12.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k5g9v" for this suite.
Feb  2 12:35:18.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:35:18.855: INFO: namespace: e2e-tests-pods-k5g9v, resource: bindings, ignored listing per whitelist
Feb  2 12:35:18.865: INFO: namespace e2e-tests-pods-k5g9v deletion completed in 6.158845519s

• [SLOW TEST:32.614 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
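Editor's note: the long Pod struct dumped in the log above boils down to roughly the following manifest (reconstructed from fields visible in the dump — name, labels, container, image, restart policy; everything the framework defaulted, such as the service-account token volume, is omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-6569044b-45b8-11ea-8b99-0242ac110005
  labels:
    name: foo
    time: "590179020"
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
  restartPolicy: Always
```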
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:35:18.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-586zn
Feb  2 12:35:29.298: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-586zn
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 12:35:29.306: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:39:30.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-586zn" for this suite.
Feb  2 12:39:36.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:39:36.330: INFO: namespace: e2e-tests-container-probe-586zn, resource: bindings, ignored listing per whitelist
Feb  2 12:39:36.394: INFO: namespace e2e-tests-container-probe-586zn deletion completed in 6.238755056s

• [SLOW TEST:257.529 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:39:36.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-124b74a9-45b9-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 12:39:36.701: INFO: Waiting up to 5m0s for pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-cq5cd" to be "success or failure"
Feb  2 12:39:36.709: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288278ms
Feb  2 12:39:38.720: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018899558s
Feb  2 12:39:40.745: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044514802s
Feb  2 12:39:42.757: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056302005s
Feb  2 12:39:44.768: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06662203s
Feb  2 12:39:46.779: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078209556s
STEP: Saw pod success
Feb  2 12:39:46.779: INFO: Pod "pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:39:46.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 12:39:47.517: INFO: Waiting for pod pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005 to disappear
Feb  2 12:39:47.878: INFO: Pod pod-secrets-124c3d13-45b9-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:39:47.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cq5cd" for this suite.
Feb  2 12:39:53.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:39:54.030: INFO: namespace: e2e-tests-secrets-cq5cd, resource: bindings, ignored listing per whitelist
Feb  2 12:39:54.141: INFO: namespace e2e-tests-secrets-cq5cd deletion completed in 6.241003282s

• [SLOW TEST:17.747 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
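Editor's note: this test mounts one secret into a pod through two separate volumes. A pod doing that might be declared as below (a sketch, not the exact test spec: the secret and pod names in the log are generated, and `busybox` stands in for the e2e framework's own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-multi          # illustrative; the test generates its name
spec:
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret        # stand-in for the generated secret name
  - name: secret-volume-2
    secret:
      secretName: my-secret        # same secret, second volume
  containers:
  - name: secret-volume-test       # container name taken from the log
    image: busybox:1.29            # stand-in image
    command: ["ls", "/etc/secret-volume-1", "/etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  restartPolicy: Never
```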
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:39:54.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9s9h2
Feb  2 12:40:02.628: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9s9h2
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 12:40:02.639: INFO: Initial restart count of pod liveness-http is 0
Feb  2 12:40:24.938: INFO: Restart count of pod e2e-tests-container-probe-9s9h2/liveness-http is now 1 (22.298373898s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:40:24.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9s9h2" for this suite.
Feb  2 12:40:31.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:40:31.193: INFO: namespace: e2e-tests-container-probe-9s9h2, resource: bindings, ignored listing per whitelist
Feb  2 12:40:31.466: INFO: namespace e2e-tests-container-probe-9s9h2 deletion completed in 6.450883063s

• [SLOW TEST:37.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
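Editor's note: the restart observed above (restartCount going from 0 to 1 after ~22s) is the kubelet reacting to a failing HTTP liveness probe. A pod with such a probe is declared roughly as follows — only the pod name and the /healthz path come from the log; the image, port, and timing values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: example.invalid/liveness-server   # stand-in; the e2e suite uses its own probe-test image
    livenessProbe:
      httpGet:
        path: /healthz     # endpoint checked by the kubelet
        port: 8080         # illustrative port
      initialDelaySeconds: 15
      failureThreshold: 1
```

When /healthz starts returning a non-2xx/3xx status, the kubelet kills and restarts the container, which is exactly the restart-count transition the test asserts on.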
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:40:31.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:40:31.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-flq8k" to be "success or failure"
Feb  2 12:40:31.706: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.378729ms
Feb  2 12:40:33.729: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033321784s
Feb  2 12:40:35.775: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080049368s
Feb  2 12:40:37.888: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1925511s
Feb  2 12:40:39.909: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21346119s
Feb  2 12:40:41.930: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234953443s
STEP: Saw pod success
Feb  2 12:40:41.930: INFO: Pod "downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:40:41.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:40:42.128: INFO: Waiting for pod downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005 to disappear
Feb  2 12:40:42.139: INFO: Pod downwardapi-volume-33180bf4-45b9-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:40:42.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-flq8k" for this suite.
Feb  2 12:40:48.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:40:48.740: INFO: namespace: e2e-tests-projected-flq8k, resource: bindings, ignored listing per whitelist
Feb  2 12:40:48.779: INFO: namespace e2e-tests-projected-flq8k deletion completed in 6.49843626s

• [SLOW TEST:17.312 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
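Editor's note: this test exposes the container's CPU request to itself through a projected downward API volume. A minimal sketch of that wiring (pod name, image, and the 250m request are illustrative; the `client-container` name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  containers:
  - name: client-container
    image: busybox:1.29              # stand-in for the e2e test image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
  restartPolicy: Never
```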
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:40:48.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb  2 12:40:59.364: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:41:27.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zmbdz" for this suite.
Feb  2 12:41:33.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:41:33.710: INFO: namespace: e2e-tests-namespaces-zmbdz, resource: bindings, ignored listing per whitelist
Feb  2 12:41:33.848: INFO: namespace e2e-tests-namespaces-zmbdz deletion completed in 6.243032475s
STEP: Destroying namespace "e2e-tests-nsdeletetest-6g7x4" for this suite.
Feb  2 12:41:33.852: INFO: Namespace e2e-tests-nsdeletetest-6g7x4 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-8w7cg" for this suite.
Feb  2 12:41:39.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:41:39.925: INFO: namespace: e2e-tests-nsdeletetest-8w7cg, resource: bindings, ignored listing per whitelist
Feb  2 12:41:40.221: INFO: namespace e2e-tests-nsdeletetest-8w7cg deletion completed in 6.368307745s

• [SLOW TEST:51.442 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:41:40.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:41:52.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-h44p8" for this suite.
Feb  2 12:42:34.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:42:34.765: INFO: namespace: e2e-tests-kubelet-test-h44p8, resource: bindings, ignored listing per whitelist
Feb  2 12:42:34.802: INFO: namespace e2e-tests-kubelet-test-h44p8 deletion completed in 42.13953709s

• [SLOW TEST:54.581 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
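Editor's note: the "should not write to root filesystem" check relies on the container-level `readOnlyRootFilesystem` security context. A minimal sketch (image, command, and pod name are illustrative assumptions, not the test's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs          # illustrative
spec:
  containers:
  - name: busybox
    image: busybox:1.29              # stand-in image
    # Any attempted write to the root filesystem should fail:
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
  restartPolicy: Never
```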
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:42:34.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:42:35.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-4wktl" to be "success or failure"
Feb  2 12:42:35.048: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.156689ms
Feb  2 12:42:37.059: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02685138s
Feb  2 12:42:39.074: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042336707s
Feb  2 12:42:41.104: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0715017s
Feb  2 12:42:43.137: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105006495s
Feb  2 12:42:45.499: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.467226856s
STEP: Saw pod success
Feb  2 12:42:45.499: INFO: Pod "downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:42:45.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:42:46.076: INFO: Waiting for pod downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005 to disappear
Feb  2 12:42:46.309: INFO: Pod downwardapi-volume-7c9d7a27-45b9-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:42:46.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4wktl" for this suite.
Feb  2 12:42:52.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:42:52.766: INFO: namespace: e2e-tests-projected-4wktl, resource: bindings, ignored listing per whitelist
Feb  2 12:42:52.871: INFO: namespace e2e-tests-projected-4wktl deletion completed in 6.544059613s

• [SLOW TEST:18.069 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
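Editor's note: the memory counterpart of the downward API volume uses `resource: limits.memory`; when the container declares no memory limit, the projected value falls back to the node's allocatable memory, which is what this test asserts. The relevant volume fragment might look like this (a sketch, not the exact test spec):

```yaml
# Fragment of a pod spec's volumes section (illustrative)
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container   # must name a container in the pod
            resource: limits.memory
            divisor: 1Mi                      # report the value in mebibytes
```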
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:42:52.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:42:53.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hjch4" for this suite.
Feb  2 12:43:17.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:43:17.500: INFO: namespace: e2e-tests-pods-hjch4, resource: bindings, ignored listing per whitelist
Feb  2 12:43:17.892: INFO: namespace e2e-tests-pods-hjch4 deletion completed in 24.576900411s

• [SLOW TEST:25.020 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
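The "Set QOS Class" spec above submits a pod and verifies its QoS class. The classification rule can be sketched as follows; this loosely mirrors the logic of Kubernetes' `qos.GetPodQOS` (the real implementation lives in `k8s.io/kubernetes/pkg/apis/core/v1/helper/qos`), with resource quantities reduced to plain strings for illustration:

```go
package main

import "fmt"

// Resources is a simplified stand-in for one container's resource
// requests and limits (quantities kept as strings for this sketch).
type Resources struct {
	Requests map[string]string
	Limits   map[string]string
}

// qosClass sketches the classification rule: Guaranteed when every
// container's cpu and memory limits are set and equal to its requests,
// BestEffort when no container sets anything, Burstable otherwise.
func qosClass(containers []Resources) string {
	anySet := false
	guaranteed := true
	for _, c := range containers {
		if len(c.Requests) > 0 || len(c.Limits) > 0 {
			anySet = true
		}
		for _, res := range []string{"cpu", "memory"} {
			// A missing limit, or a limit that differs from the
			// request, disqualifies the pod from Guaranteed.
			if c.Limits[res] == "" || c.Limits[res] != c.Requests[res] {
				guaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case guaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	fmt.Println(qosClass(nil)) // no resources anywhere
	fmt.Println(qosClass([]Resources{{Requests: map[string]string{"cpu": "100m"}}}))
	fmt.Println(qosClass([]Resources{{
		Requests: map[string]string{"cpu": "100m", "memory": "64Mi"},
		Limits:   map[string]string{"cpu": "100m", "memory": "64Mi"},
	}}))
}
```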
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:43:17.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  2 12:43:18.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:18.470: INFO: stderr: ""
Feb  2 12:43:18.470: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 12:43:18.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:18.828: INFO: stderr: ""
Feb  2 12:43:18.829: INFO: stdout: "update-demo-nautilus-l8kft update-demo-nautilus-q2mdg "
Feb  2 12:43:18.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8kft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:19.041: INFO: stderr: ""
Feb  2 12:43:19.041: INFO: stdout: ""
Feb  2 12:43:19.041: INFO: update-demo-nautilus-l8kft is created but not running
Feb  2 12:43:24.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:24.188: INFO: stderr: ""
Feb  2 12:43:24.188: INFO: stdout: "update-demo-nautilus-l8kft update-demo-nautilus-q2mdg "
Feb  2 12:43:24.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8kft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:24.292: INFO: stderr: ""
Feb  2 12:43:24.292: INFO: stdout: ""
Feb  2 12:43:24.292: INFO: update-demo-nautilus-l8kft is created but not running
Feb  2 12:43:29.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:29.498: INFO: stderr: ""
Feb  2 12:43:29.498: INFO: stdout: "update-demo-nautilus-l8kft update-demo-nautilus-q2mdg "
Feb  2 12:43:29.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8kft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:29.674: INFO: stderr: ""
Feb  2 12:43:29.674: INFO: stdout: ""
Feb  2 12:43:29.674: INFO: update-demo-nautilus-l8kft is created but not running
Feb  2 12:43:34.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:34.843: INFO: stderr: ""
Feb  2 12:43:34.843: INFO: stdout: "update-demo-nautilus-l8kft update-demo-nautilus-q2mdg "
Feb  2 12:43:34.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8kft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:34.993: INFO: stderr: ""
Feb  2 12:43:34.993: INFO: stdout: "true"
Feb  2 12:43:34.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8kft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:35.146: INFO: stderr: ""
Feb  2 12:43:35.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 12:43:35.146: INFO: validating pod update-demo-nautilus-l8kft
Feb  2 12:43:35.172: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 12:43:35.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 12:43:35.172: INFO: update-demo-nautilus-l8kft is verified up and running
Feb  2 12:43:35.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q2mdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:35.326: INFO: stderr: ""
Feb  2 12:43:35.326: INFO: stdout: "true"
Feb  2 12:43:35.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q2mdg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:35.430: INFO: stderr: ""
Feb  2 12:43:35.430: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 12:43:35.430: INFO: validating pod update-demo-nautilus-q2mdg
Feb  2 12:43:35.443: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 12:43:35.443: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 12:43:35.443: INFO: update-demo-nautilus-q2mdg is verified up and running
STEP: using delete to clean up resources
Feb  2 12:43:35.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:35.564: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 12:43:35.565: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  2 12:43:35.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wsdbf'
Feb  2 12:43:35.705: INFO: stderr: "No resources found.\n"
Feb  2 12:43:35.705: INFO: stdout: ""
Feb  2 12:43:35.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wsdbf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 12:43:35.913: INFO: stderr: ""
Feb  2 12:43:35.913: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:43:35.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wsdbf" for this suite.
Feb  2 12:43:59.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:44:00.124: INFO: namespace: e2e-tests-kubectl-wsdbf, resource: bindings, ignored listing per whitelist
Feb  2 12:44:00.157: INFO: namespace e2e-tests-kubectl-wsdbf deletion completed in 24.221110908s

• [SLOW TEST:42.264 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
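The Update Demo spec above repeatedly runs `kubectl get pods -o template` with a go-template that emits `true` once the `update-demo` container is running. kubectl's template engine adds helpers such as `exists` that Go's stdlib `text/template` lacks, so the sketch below approximates the same check with plain field access over a decoded pod, using a hypothetical trimmed-down pod document:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// podJSON is a hypothetical, trimmed-down pod status, standing in for
// what `kubectl get pods -o json/template` would see for one pod.
const podJSON = `{
  "status": {
    "containerStatuses": [
      {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-02T12:43:30Z"}}}
    ]
  }
}`

// runningTmpl approximates the template from the log. Without kubectl's
// "exists" helper, it relies on text/template's truthiness rules: an
// absent or empty .state.running map evaluates as false.
const runningTmpl = `{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{if .state.running}}true{{end}}{{end}}{{end}}`

// isRunning decodes the pod JSON and renders the template, reporting
// whether it produced "true", just as the test loop polls for.
func isRunning(podJSON string) (bool, error) {
	var pod map[string]interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		return false, err
	}
	var buf bytes.Buffer
	t := template.Must(template.New("running").Parse(runningTmpl))
	if err := t.Execute(&buf, pod); err != nil {
		return false, err
	}
	return buf.String() == "true", nil
}

func main() {
	ok, err := isRunning(podJSON)
	fmt.Println(ok, err)
}
```

While a pod is still starting, `containerStatuses` is absent or its `state` lacks a `running` key, the template renders nothing, and the loop in the log reports "created but not running" and retries.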
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:44:00.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  2 12:44:22.625: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:22.658: INFO: Pod pod-with-poststart-http-hook still exists
Feb  2 12:44:24.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:24.684: INFO: Pod pod-with-poststart-http-hook still exists
Feb  2 12:44:26.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:26.737: INFO: Pod pod-with-poststart-http-hook still exists
Feb  2 12:44:28.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:28.670: INFO: Pod pod-with-poststart-http-hook still exists
Feb  2 12:44:30.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:30.672: INFO: Pod pod-with-poststart-http-hook still exists
Feb  2 12:44:32.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  2 12:44:32.686: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:44:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jcqb8" for this suite.
Feb  2 12:45:00.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:45:00.757: INFO: namespace: e2e-tests-container-lifecycle-hook-jcqb8, resource: bindings, ignored listing per whitelist
Feb  2 12:45:00.872: INFO: namespace e2e-tests-container-lifecycle-hook-jcqb8 deletion completed in 28.175886125s

• [SLOW TEST:60.714 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:45:00.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  2 12:45:27.291: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:27.291: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:27.402518       9 log.go:172] (0xc000a3b080) (0xc000b53a40) Create stream
I0202 12:45:27.402588       9 log.go:172] (0xc000a3b080) (0xc000b53a40) Stream added, broadcasting: 1
I0202 12:45:27.409721       9 log.go:172] (0xc000a3b080) Reply frame received for 1
I0202 12:45:27.409782       9 log.go:172] (0xc000a3b080) (0xc0021ea8c0) Create stream
I0202 12:45:27.409797       9 log.go:172] (0xc000a3b080) (0xc0021ea8c0) Stream added, broadcasting: 3
I0202 12:45:27.411771       9 log.go:172] (0xc000a3b080) Reply frame received for 3
I0202 12:45:27.411846       9 log.go:172] (0xc000a3b080) (0xc000b53ae0) Create stream
I0202 12:45:27.411860       9 log.go:172] (0xc000a3b080) (0xc000b53ae0) Stream added, broadcasting: 5
I0202 12:45:27.416655       9 log.go:172] (0xc000a3b080) Reply frame received for 5
I0202 12:45:27.571547       9 log.go:172] (0xc000a3b080) Data frame received for 3
I0202 12:45:27.571660       9 log.go:172] (0xc0021ea8c0) (3) Data frame handling
I0202 12:45:27.571769       9 log.go:172] (0xc0021ea8c0) (3) Data frame sent
I0202 12:45:27.759490       9 log.go:172] (0xc000a3b080) Data frame received for 1
I0202 12:45:27.759576       9 log.go:172] (0xc000a3b080) (0xc0021ea8c0) Stream removed, broadcasting: 3
I0202 12:45:27.759658       9 log.go:172] (0xc000a3b080) (0xc000b53ae0) Stream removed, broadcasting: 5
I0202 12:45:27.759732       9 log.go:172] (0xc000b53a40) (1) Data frame handling
I0202 12:45:27.759804       9 log.go:172] (0xc000b53a40) (1) Data frame sent
I0202 12:45:27.759843       9 log.go:172] (0xc000a3b080) (0xc000b53a40) Stream removed, broadcasting: 1
I0202 12:45:27.759903       9 log.go:172] (0xc000a3b080) Go away received
I0202 12:45:27.760178       9 log.go:172] (0xc000a3b080) (0xc000b53a40) Stream removed, broadcasting: 1
I0202 12:45:27.760213       9 log.go:172] (0xc000a3b080) (0xc0021ea8c0) Stream removed, broadcasting: 3
I0202 12:45:27.760232       9 log.go:172] (0xc000a3b080) (0xc000b53ae0) Stream removed, broadcasting: 5
Feb  2 12:45:27.760: INFO: Exec stderr: ""
Feb  2 12:45:27.760: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:27.760: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:27.923506       9 log.go:172] (0xc000bec6e0) (0xc0021eac80) Create stream
I0202 12:45:27.923614       9 log.go:172] (0xc000bec6e0) (0xc0021eac80) Stream added, broadcasting: 1
I0202 12:45:27.929998       9 log.go:172] (0xc000bec6e0) Reply frame received for 1
I0202 12:45:27.930044       9 log.go:172] (0xc000bec6e0) (0xc000b53b80) Create stream
I0202 12:45:27.930063       9 log.go:172] (0xc000bec6e0) (0xc000b53b80) Stream added, broadcasting: 3
I0202 12:45:27.931296       9 log.go:172] (0xc000bec6e0) Reply frame received for 3
I0202 12:45:27.931332       9 log.go:172] (0xc000bec6e0) (0xc001ce0a00) Create stream
I0202 12:45:27.931340       9 log.go:172] (0xc000bec6e0) (0xc001ce0a00) Stream added, broadcasting: 5
I0202 12:45:27.932545       9 log.go:172] (0xc000bec6e0) Reply frame received for 5
I0202 12:45:28.155283       9 log.go:172] (0xc000bec6e0) Data frame received for 3
I0202 12:45:28.155340       9 log.go:172] (0xc000b53b80) (3) Data frame handling
I0202 12:45:28.155416       9 log.go:172] (0xc000b53b80) (3) Data frame sent
I0202 12:45:28.289282       9 log.go:172] (0xc000bec6e0) Data frame received for 1
I0202 12:45:28.289342       9 log.go:172] (0xc0021eac80) (1) Data frame handling
I0202 12:45:28.289394       9 log.go:172] (0xc0021eac80) (1) Data frame sent
I0202 12:45:28.289518       9 log.go:172] (0xc000bec6e0) (0xc0021eac80) Stream removed, broadcasting: 1
I0202 12:45:28.289995       9 log.go:172] (0xc000bec6e0) (0xc000b53b80) Stream removed, broadcasting: 3
I0202 12:45:28.290112       9 log.go:172] (0xc000bec6e0) (0xc001ce0a00) Stream removed, broadcasting: 5
I0202 12:45:28.290205       9 log.go:172] (0xc000bec6e0) (0xc0021eac80) Stream removed, broadcasting: 1
I0202 12:45:28.290213       9 log.go:172] (0xc000bec6e0) (0xc000b53b80) Stream removed, broadcasting: 3
I0202 12:45:28.290220       9 log.go:172] (0xc000bec6e0) (0xc001ce0a00) Stream removed, broadcasting: 5
Feb  2 12:45:28.290: INFO: Exec stderr: ""
Feb  2 12:45:28.290: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:28.290: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:28.384929       9 log.go:172] (0xc0023ee000) (0xc001ce0d20) Create stream
I0202 12:45:28.384983       9 log.go:172] (0xc0023ee000) (0xc001ce0d20) Stream added, broadcasting: 1
I0202 12:45:28.389908       9 log.go:172] (0xc0023ee000) Reply frame received for 1
I0202 12:45:28.389929       9 log.go:172] (0xc0023ee000) (0xc000b53cc0) Create stream
I0202 12:45:28.389936       9 log.go:172] (0xc0023ee000) (0xc000b53cc0) Stream added, broadcasting: 3
I0202 12:45:28.390997       9 log.go:172] (0xc0023ee000) Reply frame received for 3
I0202 12:45:28.391020       9 log.go:172] (0xc0023ee000) (0xc000b53e00) Create stream
I0202 12:45:28.391030       9 log.go:172] (0xc0023ee000) (0xc000b53e00) Stream added, broadcasting: 5
I0202 12:45:28.391831       9 log.go:172] (0xc0023ee000) Reply frame received for 5
I0202 12:45:28.603191       9 log.go:172] (0xc0023ee000) Data frame received for 3
I0202 12:45:28.603241       9 log.go:172] (0xc000b53cc0) (3) Data frame handling
I0202 12:45:28.603288       9 log.go:172] (0xc000b53cc0) (3) Data frame sent
I0202 12:45:28.734507       9 log.go:172] (0xc0023ee000) Data frame received for 1
I0202 12:45:28.734667       9 log.go:172] (0xc0023ee000) (0xc000b53cc0) Stream removed, broadcasting: 3
I0202 12:45:28.734743       9 log.go:172] (0xc001ce0d20) (1) Data frame handling
I0202 12:45:28.734804       9 log.go:172] (0xc0023ee000) (0xc000b53e00) Stream removed, broadcasting: 5
I0202 12:45:28.734864       9 log.go:172] (0xc001ce0d20) (1) Data frame sent
I0202 12:45:28.734894       9 log.go:172] (0xc0023ee000) (0xc001ce0d20) Stream removed, broadcasting: 1
I0202 12:45:28.734918       9 log.go:172] (0xc0023ee000) Go away received
I0202 12:45:28.735141       9 log.go:172] (0xc0023ee000) (0xc001ce0d20) Stream removed, broadcasting: 1
I0202 12:45:28.735164       9 log.go:172] (0xc0023ee000) (0xc000b53cc0) Stream removed, broadcasting: 3
I0202 12:45:28.735179       9 log.go:172] (0xc0023ee000) (0xc000b53e00) Stream removed, broadcasting: 5
Feb  2 12:45:28.735: INFO: Exec stderr: ""
Feb  2 12:45:28.735: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:28.735: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:28.793770       9 log.go:172] (0xc0017182c0) (0xc0025ae280) Create stream
I0202 12:45:28.793856       9 log.go:172] (0xc0017182c0) (0xc0025ae280) Stream added, broadcasting: 1
I0202 12:45:28.796843       9 log.go:172] (0xc0017182c0) Reply frame received for 1
I0202 12:45:28.796867       9 log.go:172] (0xc0017182c0) (0xc002368140) Create stream
I0202 12:45:28.796874       9 log.go:172] (0xc0017182c0) (0xc002368140) Stream added, broadcasting: 3
I0202 12:45:28.797457       9 log.go:172] (0xc0017182c0) Reply frame received for 3
I0202 12:45:28.797475       9 log.go:172] (0xc0017182c0) (0xc0021eae60) Create stream
I0202 12:45:28.797483       9 log.go:172] (0xc0017182c0) (0xc0021eae60) Stream added, broadcasting: 5
I0202 12:45:28.798214       9 log.go:172] (0xc0017182c0) Reply frame received for 5
I0202 12:45:28.928453       9 log.go:172] (0xc0017182c0) Data frame received for 3
I0202 12:45:28.928496       9 log.go:172] (0xc002368140) (3) Data frame handling
I0202 12:45:28.928506       9 log.go:172] (0xc002368140) (3) Data frame sent
I0202 12:45:29.051684       9 log.go:172] (0xc0017182c0) Data frame received for 1
I0202 12:45:29.051782       9 log.go:172] (0xc0017182c0) (0xc002368140) Stream removed, broadcasting: 3
I0202 12:45:29.051828       9 log.go:172] (0xc0025ae280) (1) Data frame handling
I0202 12:45:29.051880       9 log.go:172] (0xc0025ae280) (1) Data frame sent
I0202 12:45:29.051908       9 log.go:172] (0xc0017182c0) (0xc0021eae60) Stream removed, broadcasting: 5
I0202 12:45:29.051974       9 log.go:172] (0xc0017182c0) (0xc0025ae280) Stream removed, broadcasting: 1
I0202 12:45:29.052003       9 log.go:172] (0xc0017182c0) Go away received
I0202 12:45:29.052294       9 log.go:172] (0xc0017182c0) (0xc0025ae280) Stream removed, broadcasting: 1
I0202 12:45:29.052307       9 log.go:172] (0xc0017182c0) (0xc002368140) Stream removed, broadcasting: 3
I0202 12:45:29.052315       9 log.go:172] (0xc0017182c0) (0xc0021eae60) Stream removed, broadcasting: 5
Feb  2 12:45:29.052: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  2 12:45:29.052: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:29.052: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:29.178326       9 log.go:172] (0xc000becbb0) (0xc0021eb220) Create stream
I0202 12:45:29.178412       9 log.go:172] (0xc000becbb0) (0xc0021eb220) Stream added, broadcasting: 1
I0202 12:45:29.182533       9 log.go:172] (0xc000becbb0) Reply frame received for 1
I0202 12:45:29.182616       9 log.go:172] (0xc000becbb0) (0xc0017bec80) Create stream
I0202 12:45:29.182629       9 log.go:172] (0xc000becbb0) (0xc0017bec80) Stream added, broadcasting: 3
I0202 12:45:29.183532       9 log.go:172] (0xc000becbb0) Reply frame received for 3
I0202 12:45:29.183551       9 log.go:172] (0xc000becbb0) (0xc0021eb360) Create stream
I0202 12:45:29.183561       9 log.go:172] (0xc000becbb0) (0xc0021eb360) Stream added, broadcasting: 5
I0202 12:45:29.184802       9 log.go:172] (0xc000becbb0) Reply frame received for 5
I0202 12:45:29.320843       9 log.go:172] (0xc000becbb0) Data frame received for 3
I0202 12:45:29.320888       9 log.go:172] (0xc0017bec80) (3) Data frame handling
I0202 12:45:29.320907       9 log.go:172] (0xc0017bec80) (3) Data frame sent
I0202 12:45:29.434588       9 log.go:172] (0xc000becbb0) (0xc0021eb360) Stream removed, broadcasting: 5
I0202 12:45:29.434764       9 log.go:172] (0xc000becbb0) Data frame received for 1
I0202 12:45:29.434782       9 log.go:172] (0xc0021eb220) (1) Data frame handling
I0202 12:45:29.434804       9 log.go:172] (0xc0021eb220) (1) Data frame sent
I0202 12:45:29.434842       9 log.go:172] (0xc000becbb0) (0xc0021eb220) Stream removed, broadcasting: 1
I0202 12:45:29.434996       9 log.go:172] (0xc000becbb0) (0xc0017bec80) Stream removed, broadcasting: 3
I0202 12:45:29.435022       9 log.go:172] (0xc000becbb0) Go away received
I0202 12:45:29.435266       9 log.go:172] (0xc000becbb0) (0xc0021eb220) Stream removed, broadcasting: 1
I0202 12:45:29.435296       9 log.go:172] (0xc000becbb0) (0xc0017bec80) Stream removed, broadcasting: 3
I0202 12:45:29.435309       9 log.go:172] (0xc000becbb0) (0xc0021eb360) Stream removed, broadcasting: 5
Feb  2 12:45:29.435: INFO: Exec stderr: ""
Feb  2 12:45:29.435: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:29.435: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:29.501034       9 log.go:172] (0xc001718790) (0xc0025ae5a0) Create stream
I0202 12:45:29.501112       9 log.go:172] (0xc001718790) (0xc0025ae5a0) Stream added, broadcasting: 1
I0202 12:45:29.513216       9 log.go:172] (0xc001718790) Reply frame received for 1
I0202 12:45:29.513291       9 log.go:172] (0xc001718790) (0xc0023681e0) Create stream
I0202 12:45:29.513305       9 log.go:172] (0xc001718790) (0xc0023681e0) Stream added, broadcasting: 3
I0202 12:45:29.515117       9 log.go:172] (0xc001718790) Reply frame received for 3
I0202 12:45:29.515152       9 log.go:172] (0xc001718790) (0xc000b53ea0) Create stream
I0202 12:45:29.515165       9 log.go:172] (0xc001718790) (0xc000b53ea0) Stream added, broadcasting: 5
I0202 12:45:29.516340       9 log.go:172] (0xc001718790) Reply frame received for 5
I0202 12:45:29.666784       9 log.go:172] (0xc001718790) Data frame received for 3
I0202 12:45:29.666882       9 log.go:172] (0xc0023681e0) (3) Data frame handling
I0202 12:45:29.666920       9 log.go:172] (0xc0023681e0) (3) Data frame sent
I0202 12:45:29.764207       9 log.go:172] (0xc001718790) Data frame received for 1
I0202 12:45:29.764278       9 log.go:172] (0xc001718790) (0xc000b53ea0) Stream removed, broadcasting: 5
I0202 12:45:29.764354       9 log.go:172] (0xc0025ae5a0) (1) Data frame handling
I0202 12:45:29.764395       9 log.go:172] (0xc0025ae5a0) (1) Data frame sent
I0202 12:45:29.764416       9 log.go:172] (0xc001718790) (0xc0023681e0) Stream removed, broadcasting: 3
I0202 12:45:29.764443       9 log.go:172] (0xc001718790) (0xc0025ae5a0) Stream removed, broadcasting: 1
I0202 12:45:29.764459       9 log.go:172] (0xc001718790) Go away received
I0202 12:45:29.764581       9 log.go:172] (0xc001718790) (0xc0025ae5a0) Stream removed, broadcasting: 1
I0202 12:45:29.764591       9 log.go:172] (0xc001718790) (0xc0023681e0) Stream removed, broadcasting: 3
I0202 12:45:29.764596       9 log.go:172] (0xc001718790) (0xc000b53ea0) Stream removed, broadcasting: 5
Feb  2 12:45:29.764: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  2 12:45:29.764: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:29.764: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:29.854645       9 log.go:172] (0xc001718c60) (0xc0025ae820) Create stream
I0202 12:45:29.854727       9 log.go:172] (0xc001718c60) (0xc0025ae820) Stream added, broadcasting: 1
I0202 12:45:29.870220       9 log.go:172] (0xc001718c60) Reply frame received for 1
I0202 12:45:29.870323       9 log.go:172] (0xc001718c60) (0xc002368280) Create stream
I0202 12:45:29.870337       9 log.go:172] (0xc001718c60) (0xc002368280) Stream added, broadcasting: 3
I0202 12:45:29.872454       9 log.go:172] (0xc001718c60) Reply frame received for 3
I0202 12:45:29.872598       9 log.go:172] (0xc001718c60) (0xc000b53f40) Create stream
I0202 12:45:29.872619       9 log.go:172] (0xc001718c60) (0xc000b53f40) Stream added, broadcasting: 5
I0202 12:45:29.877636       9 log.go:172] (0xc001718c60) Reply frame received for 5
I0202 12:45:29.997960       9 log.go:172] (0xc001718c60) Data frame received for 3
I0202 12:45:29.998034       9 log.go:172] (0xc002368280) (3) Data frame handling
I0202 12:45:29.998074       9 log.go:172] (0xc002368280) (3) Data frame sent
I0202 12:45:30.174376       9 log.go:172] (0xc001718c60) Data frame received for 1
I0202 12:45:30.174474       9 log.go:172] (0xc001718c60) (0xc000b53f40) Stream removed, broadcasting: 5
I0202 12:45:30.174683       9 log.go:172] (0xc001718c60) (0xc002368280) Stream removed, broadcasting: 3
I0202 12:45:30.174762       9 log.go:172] (0xc0025ae820) (1) Data frame handling
I0202 12:45:30.174831       9 log.go:172] (0xc0025ae820) (1) Data frame sent
I0202 12:45:30.174852       9 log.go:172] (0xc001718c60) (0xc0025ae820) Stream removed, broadcasting: 1
I0202 12:45:30.174868       9 log.go:172] (0xc001718c60) Go away received
I0202 12:45:30.175214       9 log.go:172] (0xc001718c60) (0xc0025ae820) Stream removed, broadcasting: 1
I0202 12:45:30.175228       9 log.go:172] (0xc001718c60) (0xc002368280) Stream removed, broadcasting: 3
I0202 12:45:30.175235       9 log.go:172] (0xc001718c60) (0xc000b53f40) Stream removed, broadcasting: 5
Feb  2 12:45:30.175: INFO: Exec stderr: ""
Feb  2 12:45:30.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:30.175: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:30.283308       9 log.go:172] (0xc000a3b550) (0xc0012681e0) Create stream
I0202 12:45:30.283344       9 log.go:172] (0xc000a3b550) (0xc0012681e0) Stream added, broadcasting: 1
I0202 12:45:30.291602       9 log.go:172] (0xc000a3b550) Reply frame received for 1
I0202 12:45:30.291634       9 log.go:172] (0xc000a3b550) (0xc001268320) Create stream
I0202 12:45:30.291640       9 log.go:172] (0xc000a3b550) (0xc001268320) Stream added, broadcasting: 3
I0202 12:45:30.292806       9 log.go:172] (0xc000a3b550) Reply frame received for 3
I0202 12:45:30.292827       9 log.go:172] (0xc000a3b550) (0xc0012683c0) Create stream
I0202 12:45:30.292839       9 log.go:172] (0xc000a3b550) (0xc0012683c0) Stream added, broadcasting: 5
I0202 12:45:30.294417       9 log.go:172] (0xc000a3b550) Reply frame received for 5
I0202 12:45:30.418176       9 log.go:172] (0xc000a3b550) Data frame received for 3
I0202 12:45:30.418216       9 log.go:172] (0xc001268320) (3) Data frame handling
I0202 12:45:30.418238       9 log.go:172] (0xc001268320) (3) Data frame sent
I0202 12:45:30.670796       9 log.go:172] (0xc000a3b550) (0xc001268320) Stream removed, broadcasting: 3
I0202 12:45:30.670850       9 log.go:172] (0xc000a3b550) Data frame received for 1
I0202 12:45:30.670872       9 log.go:172] (0xc000a3b550) (0xc0012683c0) Stream removed, broadcasting: 5
I0202 12:45:30.670900       9 log.go:172] (0xc0012681e0) (1) Data frame handling
I0202 12:45:30.670925       9 log.go:172] (0xc0012681e0) (1) Data frame sent
I0202 12:45:30.670944       9 log.go:172] (0xc000a3b550) (0xc0012681e0) Stream removed, broadcasting: 1
I0202 12:45:30.671038       9 log.go:172] (0xc000a3b550) Go away received
I0202 12:45:30.671181       9 log.go:172] (0xc000a3b550) (0xc0012681e0) Stream removed, broadcasting: 1
I0202 12:45:30.671194       9 log.go:172] (0xc000a3b550) (0xc001268320) Stream removed, broadcasting: 3
I0202 12:45:30.671215       9 log.go:172] (0xc000a3b550) (0xc0012683c0) Stream removed, broadcasting: 5
Feb  2 12:45:30.671: INFO: Exec stderr: ""
Feb  2 12:45:30.671: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:30.671: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:30.751019       9 log.go:172] (0xc000a3ba20) (0xc001268780) Create stream
I0202 12:45:30.751076       9 log.go:172] (0xc000a3ba20) (0xc001268780) Stream added, broadcasting: 1
I0202 12:45:30.755997       9 log.go:172] (0xc000a3ba20) Reply frame received for 1
I0202 12:45:30.756108       9 log.go:172] (0xc000a3ba20) (0xc0025ae960) Create stream
I0202 12:45:30.756132       9 log.go:172] (0xc000a3ba20) (0xc0025ae960) Stream added, broadcasting: 3
I0202 12:45:30.757587       9 log.go:172] (0xc000a3ba20) Reply frame received for 3
I0202 12:45:30.757626       9 log.go:172] (0xc000a3ba20) (0xc002368320) Create stream
I0202 12:45:30.757644       9 log.go:172] (0xc000a3ba20) (0xc002368320) Stream added, broadcasting: 5
I0202 12:45:30.758576       9 log.go:172] (0xc000a3ba20) Reply frame received for 5
I0202 12:45:30.843865       9 log.go:172] (0xc000a3ba20) Data frame received for 3
I0202 12:45:30.843906       9 log.go:172] (0xc0025ae960) (3) Data frame handling
I0202 12:45:30.843954       9 log.go:172] (0xc0025ae960) (3) Data frame sent
I0202 12:45:30.974671       9 log.go:172] (0xc000a3ba20) (0xc0025ae960) Stream removed, broadcasting: 3
I0202 12:45:30.974739       9 log.go:172] (0xc000a3ba20) Data frame received for 1
I0202 12:45:30.974763       9 log.go:172] (0xc000a3ba20) (0xc002368320) Stream removed, broadcasting: 5
I0202 12:45:30.974801       9 log.go:172] (0xc001268780) (1) Data frame handling
I0202 12:45:30.974827       9 log.go:172] (0xc001268780) (1) Data frame sent
I0202 12:45:30.974841       9 log.go:172] (0xc000a3ba20) (0xc001268780) Stream removed, broadcasting: 1
I0202 12:45:30.974859       9 log.go:172] (0xc000a3ba20) Go away received
I0202 12:45:30.975077       9 log.go:172] (0xc000a3ba20) (0xc001268780) Stream removed, broadcasting: 1
I0202 12:45:30.975096       9 log.go:172] (0xc000a3ba20) (0xc0025ae960) Stream removed, broadcasting: 3
I0202 12:45:30.975107       9 log.go:172] (0xc000a3ba20) (0xc002368320) Stream removed, broadcasting: 5
Feb  2 12:45:30.975: INFO: Exec stderr: ""
Feb  2 12:45:30.975: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jwpv4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 12:45:30.975: INFO: >>> kubeConfig: /root/.kube/config
I0202 12:45:31.098214       9 log.go:172] (0xc000a3bef0) (0xc001268b40) Create stream
I0202 12:45:31.098266       9 log.go:172] (0xc000a3bef0) (0xc001268b40) Stream added, broadcasting: 1
I0202 12:45:31.105910       9 log.go:172] (0xc000a3bef0) Reply frame received for 1
I0202 12:45:31.105972       9 log.go:172] (0xc000a3bef0) (0xc002368460) Create stream
I0202 12:45:31.105999       9 log.go:172] (0xc000a3bef0) (0xc002368460) Stream added, broadcasting: 3
I0202 12:45:31.107217       9 log.go:172] (0xc000a3bef0) Reply frame received for 3
I0202 12:45:31.107235       9 log.go:172] (0xc000a3bef0) (0xc002368500) Create stream
I0202 12:45:31.107242       9 log.go:172] (0xc000a3bef0) (0xc002368500) Stream added, broadcasting: 5
I0202 12:45:31.108893       9 log.go:172] (0xc000a3bef0) Reply frame received for 5
I0202 12:45:31.240338       9 log.go:172] (0xc000a3bef0) Data frame received for 3
I0202 12:45:31.240400       9 log.go:172] (0xc002368460) (3) Data frame handling
I0202 12:45:31.240423       9 log.go:172] (0xc002368460) (3) Data frame sent
I0202 12:45:31.351283       9 log.go:172] (0xc000a3bef0) Data frame received for 1
I0202 12:45:31.351324       9 log.go:172] (0xc001268b40) (1) Data frame handling
I0202 12:45:31.351343       9 log.go:172] (0xc001268b40) (1) Data frame sent
I0202 12:45:31.351365       9 log.go:172] (0xc000a3bef0) (0xc001268b40) Stream removed, broadcasting: 1
I0202 12:45:31.352290       9 log.go:172] (0xc000a3bef0) (0xc002368460) Stream removed, broadcasting: 3
I0202 12:45:31.352382       9 log.go:172] (0xc000a3bef0) (0xc002368500) Stream removed, broadcasting: 5
I0202 12:45:31.352425       9 log.go:172] (0xc000a3bef0) Go away received
I0202 12:45:31.352496       9 log.go:172] (0xc000a3bef0) (0xc001268b40) Stream removed, broadcasting: 1
I0202 12:45:31.352518       9 log.go:172] (0xc000a3bef0) (0xc002368460) Stream removed, broadcasting: 3
I0202 12:45:31.352533       9 log.go:172] (0xc000a3bef0) (0xc002368500) Stream removed, broadcasting: 5
Feb  2 12:45:31.352: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:45:31.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jwpv4" for this suite.
Feb  2 12:46:27.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:46:27.456: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jwpv4, resource: bindings, ignored listing per whitelist
Feb  2 12:46:27.623: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jwpv4 deletion completed in 56.251244576s

• [SLOW TEST:86.752 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:46:27.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-sn5fh/configmap-test-075fae3a-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:46:27.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-sn5fh" to be "success or failure"
Feb  2 12:46:27.940: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.469731ms
Feb  2 12:46:30.478: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557003305s
Feb  2 12:46:32.506: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585684645s
Feb  2 12:46:34.631: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.710042146s
Feb  2 12:46:36.669: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.748471351s
Feb  2 12:46:38.679: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.758440468s
STEP: Saw pod success
Feb  2 12:46:38.679: INFO: Pod "pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:46:38.683: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005 container env-test: 
STEP: delete the pod
Feb  2 12:46:39.232: INFO: Waiting for pod pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:46:39.442: INFO: Pod pod-configmaps-076cccca-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:46:39.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sn5fh" for this suite.
Feb  2 12:46:45.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:46:45.612: INFO: namespace: e2e-tests-configmap-sn5fh, resource: bindings, ignored listing per whitelist
Feb  2 12:46:45.725: INFO: namespace e2e-tests-configmap-sn5fh deletion completed in 6.265426817s

• [SLOW TEST:18.101 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:46:45.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb  2 12:46:46.323: INFO: Waiting up to 5m0s for pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-var-expansion-87wtc" to be "success or failure"
Feb  2 12:46:46.333: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.457069ms
Feb  2 12:46:48.457: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133705s
Feb  2 12:46:50.482: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159242089s
Feb  2 12:46:52.924: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601243204s
Feb  2 12:46:55.430: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.106422618s
Feb  2 12:46:57.444: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.120998141s
STEP: Saw pod success
Feb  2 12:46:57.444: INFO: Pod "var-expansion-1264a534-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:46:57.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1264a534-45ba-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 12:46:57.622: INFO: Waiting for pod var-expansion-1264a534-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:46:57.670: INFO: Pod var-expansion-1264a534-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:46:57.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-87wtc" for this suite.
Feb  2 12:47:03.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:47:03.882: INFO: namespace: e2e-tests-var-expansion-87wtc, resource: bindings, ignored listing per whitelist
Feb  2 12:47:03.956: INFO: namespace e2e-tests-var-expansion-87wtc deletion completed in 6.273348969s

• [SLOW TEST:18.231 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:47:03.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-66nfj/secret-test-1d121f65-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 12:47:04.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-66nfj" to be "success or failure"
Feb  2 12:47:04.310: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.964559ms
Feb  2 12:47:06.333: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03699099s
Feb  2 12:47:08.346: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050766105s
Feb  2 12:47:10.364: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068296389s
Feb  2 12:47:12.378: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082740332s
Feb  2 12:47:14.405: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109753051s
STEP: Saw pod success
Feb  2 12:47:14.405: INFO: Pod "pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:47:14.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005 container env-test: 
STEP: delete the pod
Feb  2 12:47:14.875: INFO: Waiting for pod pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:47:14.886: INFO: Pod pod-configmaps-1d13df60-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:47:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-66nfj" for this suite.
Feb  2 12:47:22.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:47:23.074: INFO: namespace: e2e-tests-secrets-66nfj, resource: bindings, ignored listing per whitelist
Feb  2 12:47:23.173: INFO: namespace e2e-tests-secrets-66nfj deletion completed in 8.277451209s

• [SLOW TEST:19.216 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:47:23.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:48:23.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-6cfm6" for this suite.
Feb  2 12:48:31.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:48:32.036: INFO: namespace: e2e-tests-container-runtime-6cfm6, resource: bindings, ignored listing per whitelist
Feb  2 12:48:32.102: INFO: namespace e2e-tests-container-runtime-6cfm6 deletion completed in 8.199260683s

• [SLOW TEST:68.929 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:48:32.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  2 12:48:32.420: INFO: Waiting up to 5m0s for pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-var-expansion-wb9rz" to be "success or failure"
Feb  2 12:48:32.447: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.752798ms
Feb  2 12:48:34.706: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28593253s
Feb  2 12:48:36.717: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297118685s
Feb  2 12:48:38.833: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413671265s
Feb  2 12:48:40.856: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.436418922s
STEP: Saw pod success
Feb  2 12:48:40.856: INFO: Pod "var-expansion-519cd317-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:48:40.864: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-519cd317-45ba-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 12:48:40.972: INFO: Waiting for pod var-expansion-519cd317-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:48:41.005: INFO: Pod var-expansion-519cd317-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:48:41.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wb9rz" for this suite.
Feb  2 12:48:47.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:48:47.278: INFO: namespace: e2e-tests-var-expansion-wb9rz, resource: bindings, ignored listing per whitelist
Feb  2 12:48:47.345: INFO: namespace e2e-tests-var-expansion-wb9rz deletion completed in 6.318911271s

• [SLOW TEST:15.243 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:48:47.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  2 12:48:57.770: INFO: Waiting up to 5m0s for pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-pods-ztdjl" to be "success or failure"
Feb  2 12:48:57.785: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.883734ms
Feb  2 12:48:59.799: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027938697s
Feb  2 12:49:01.880: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109156956s
Feb  2 12:49:03.892: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121780028s
Feb  2 12:49:06.193: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.422114572s
Feb  2 12:49:08.209: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.438045181s
STEP: Saw pod success
Feb  2 12:49:08.209: INFO: Pod "client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:49:08.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005 container env3cont: 
STEP: delete the pod
Feb  2 12:49:08.424: INFO: Waiting for pod client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:49:08.434: INFO: Pod client-envvars-60b317cb-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:49:08.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ztdjl" for this suite.
Feb  2 12:49:54.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:49:54.749: INFO: namespace: e2e-tests-pods-ztdjl, resource: bindings, ignored listing per whitelist
Feb  2 12:49:54.790: INFO: namespace e2e-tests-pods-ztdjl deletion completed in 46.34173873s

• [SLOW TEST:67.444 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:49:54.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  2 12:49:55.155: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  2 12:50:00.180: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:50:01.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-s8rjz" for this suite.
Feb  2 12:50:10.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:50:10.693: INFO: namespace: e2e-tests-replication-controller-s8rjz, resource: bindings, ignored listing per whitelist
Feb  2 12:50:10.733: INFO: namespace e2e-tests-replication-controller-s8rjz deletion completed in 9.397192216s

• [SLOW TEST:15.942 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:50:10.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8caf7fe8-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:50:11.502: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-fbs6v" to be "success or failure"
Feb  2 12:50:11.509: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036886ms
Feb  2 12:50:14.049: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.547436946s
Feb  2 12:50:16.073: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571261112s
Feb  2 12:50:18.081: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578722664s
Feb  2 12:50:20.099: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.596710662s
Feb  2 12:50:22.113: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.610755309s
STEP: Saw pod success
Feb  2 12:50:22.113: INFO: Pod "pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:50:22.121: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:50:22.349: INFO: Waiting for pod pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:50:22.368: INFO: Pod pod-configmaps-8cb13ac3-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:50:22.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fbs6v" for this suite.
Feb  2 12:50:28.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:50:28.779: INFO: namespace: e2e-tests-configmap-fbs6v, resource: bindings, ignored listing per whitelist
Feb  2 12:50:29.093: INFO: namespace e2e-tests-configmap-fbs6v deletion completed in 6.712843227s

• [SLOW TEST:18.360 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:50:29.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:50:29.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-vj5wt" to be "success or failure"
Feb  2 12:50:29.310: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.274355ms
Feb  2 12:50:31.334: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035087019s
Feb  2 12:50:33.350: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05082749s
Feb  2 12:50:35.552: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253119735s
Feb  2 12:50:37.566: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266823982s
Feb  2 12:50:39.936: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.637226959s
STEP: Saw pod success
Feb  2 12:50:39.936: INFO: Pod "downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:50:39.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 12:50:40.308: INFO: Waiting for pod downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:50:40.324: INFO: Pod downwardapi-volume-974636e1-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:50:40.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vj5wt" for this suite.
Feb  2 12:50:46.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:50:46.613: INFO: namespace: e2e-tests-downward-api-vj5wt, resource: bindings, ignored listing per whitelist
Feb  2 12:50:46.650: INFO: namespace e2e-tests-downward-api-vj5wt deletion completed in 6.315269819s

• [SLOW TEST:17.557 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:50:46.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a1c3b51e-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:50:46.872: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-hjhdl" to be "success or failure"
Feb  2 12:50:46.891: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.23394ms
Feb  2 12:50:48.921: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04845185s
Feb  2 12:50:50.936: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063000051s
Feb  2 12:50:52.951: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078762168s
Feb  2 12:50:54.968: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095108567s
Feb  2 12:50:56.989: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116684715s
STEP: Saw pod success
Feb  2 12:50:56.989: INFO: Pod "pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:50:56.997: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:50:57.102: INFO: Waiting for pod pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:50:57.352: INFO: Pod pod-configmaps-a1c53b7e-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:50:57.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hjhdl" for this suite.
Feb  2 12:51:03.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:51:03.580: INFO: namespace: e2e-tests-configmap-hjhdl, resource: bindings, ignored listing per whitelist
Feb  2 12:51:03.656: INFO: namespace e2e-tests-configmap-hjhdl deletion completed in 6.28988517s

• [SLOW TEST:17.006 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:51:03.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  2 12:51:04.262: INFO: Waiting up to 5m0s for pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-sx8v5" to be "success or failure"
Feb  2 12:51:04.277: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.806077ms
Feb  2 12:51:06.283: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021039313s
Feb  2 12:51:08.297: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034126466s
Feb  2 12:51:10.340: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077724566s
Feb  2 12:51:12.414: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152056169s
Feb  2 12:51:14.558: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.295741687s
STEP: Saw pod success
Feb  2 12:51:14.558: INFO: Pod "pod-ac10a2d7-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:51:14.596: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac10a2d7-45ba-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 12:51:14.833: INFO: Waiting for pod pod-ac10a2d7-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:51:14.842: INFO: Pod pod-ac10a2d7-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:51:14.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sx8v5" for this suite.
Feb  2 12:51:21.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:51:21.251: INFO: namespace: e2e-tests-emptydir-sx8v5, resource: bindings, ignored listing per whitelist
Feb  2 12:51:21.345: INFO: namespace e2e-tests-emptydir-sx8v5 deletion completed in 6.492383764s

• [SLOW TEST:17.687 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:51:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b67e06c6-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 12:51:21.639: INFO: Waiting up to 5m0s for pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-m9j5t" to be "success or failure"
Feb  2 12:51:21.675: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.59811ms
Feb  2 12:51:23.717: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07792406s
Feb  2 12:51:25.733: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094579709s
Feb  2 12:51:27.744: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105705058s
Feb  2 12:51:30.145: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506123213s
Feb  2 12:51:32.169: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.530109444s
Feb  2 12:51:34.217: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.578512731s
STEP: Saw pod success
Feb  2 12:51:34.217: INFO: Pod "pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:51:34.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 12:51:34.371: INFO: Waiting for pod pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:51:34.405: INFO: Pod pod-secrets-b67fc345-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:51:34.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m9j5t" for this suite.
Feb  2 12:51:40.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:51:40.699: INFO: namespace: e2e-tests-secrets-m9j5t, resource: bindings, ignored listing per whitelist
Feb  2 12:51:40.731: INFO: namespace e2e-tests-secrets-m9j5t deletion completed in 6.31179202s

• [SLOW TEST:19.386 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:51:40.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c1f82881-45ba-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:51:40.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-lnx7g" to be "success or failure"
Feb  2 12:51:41.165: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 193.19634ms
Feb  2 12:51:43.307: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335061758s
Feb  2 12:51:45.333: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361150976s
Feb  2 12:51:47.396: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424719013s
Feb  2 12:51:49.428: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456603166s
Feb  2 12:51:51.440: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468328183s
STEP: Saw pod success
Feb  2 12:51:51.440: INFO: Pod "pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:51:51.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 12:51:51.612: INFO: Waiting for pod pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:51:51.624: INFO: Pod pod-projected-configmaps-c200ba42-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:51:51.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lnx7g" for this suite.
Feb  2 12:51:57.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:51:57.699: INFO: namespace: e2e-tests-projected-lnx7g, resource: bindings, ignored listing per whitelist
Feb  2 12:51:57.883: INFO: namespace e2e-tests-projected-lnx7g deletion completed in 6.247901851s

• [SLOW TEST:17.151 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:51:57.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-slk9
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 12:51:58.167: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-slk9" in namespace "e2e-tests-subpath-v42f7" to be "success or failure"
Feb  2 12:51:58.178: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.899934ms
Feb  2 12:52:00.284: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11667006s
Feb  2 12:52:02.373: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205756693s
Feb  2 12:52:04.739: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.571952279s
Feb  2 12:52:06.785: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617797873s
Feb  2 12:52:08.817: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649851797s
Feb  2 12:52:10.840: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.672338606s
Feb  2 12:52:12.865: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=true. Elapsed: 14.697338713s
Feb  2 12:52:14.994: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 16.826485323s
Feb  2 12:52:17.009: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 18.841398329s
Feb  2 12:52:19.050: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 20.882665089s
Feb  2 12:52:21.067: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 22.899443247s
Feb  2 12:52:23.195: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 25.027497872s
Feb  2 12:52:25.216: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 27.048634332s
Feb  2 12:52:27.232: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 29.065270784s
Feb  2 12:52:29.287: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 31.120280407s
Feb  2 12:52:31.301: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Running", Reason="", readiness=false. Elapsed: 33.134174093s
Feb  2 12:52:33.331: INFO: Pod "pod-subpath-test-secret-slk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.16414865s
STEP: Saw pod success
Feb  2 12:52:33.331: INFO: Pod "pod-subpath-test-secret-slk9" satisfied condition "success or failure"
Feb  2 12:52:33.345: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-slk9 container test-container-subpath-secret-slk9: 
STEP: delete the pod
Feb  2 12:52:33.565: INFO: Waiting for pod pod-subpath-test-secret-slk9 to disappear
Feb  2 12:52:33.598: INFO: Pod pod-subpath-test-secret-slk9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-slk9
Feb  2 12:52:33.599: INFO: Deleting pod "pod-subpath-test-secret-slk9" in namespace "e2e-tests-subpath-v42f7"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:52:33.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-v42f7" for this suite.
Feb  2 12:52:41.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:52:41.832: INFO: namespace: e2e-tests-subpath-v42f7, resource: bindings, ignored listing per whitelist
Feb  2 12:52:41.832: INFO: namespace e2e-tests-subpath-v42f7 deletion completed in 8.217935103s

• [SLOW TEST:43.949 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:52:41.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  2 12:52:42.090: INFO: Waiting up to 5m0s for pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-rns7r" to be "success or failure"
Feb  2 12:52:42.103: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.712882ms
Feb  2 12:52:44.552: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462430797s
Feb  2 12:52:46.598: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507948076s
Feb  2 12:52:48.649: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558967643s
Feb  2 12:52:50.671: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581245754s
Feb  2 12:52:52.687: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.596906309s
STEP: Saw pod success
Feb  2 12:52:52.687: INFO: Pod "pod-e671c82b-45ba-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:52:52.692: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e671c82b-45ba-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 12:52:52.760: INFO: Waiting for pod pod-e671c82b-45ba-11ea-8b99-0242ac110005 to disappear
Feb  2 12:52:52.788: INFO: Pod pod-e671c82b-45ba-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:52:52.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rns7r" for this suite.
Feb  2 12:52:58.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:52:59.002: INFO: namespace: e2e-tests-emptydir-rns7r, resource: bindings, ignored listing per whitelist
Feb  2 12:52:59.155: INFO: namespace e2e-tests-emptydir-rns7r deletion completed in 6.351388783s

• [SLOW TEST:17.322 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:52:59.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f0c9f6de-45ba-11ea-8b99-0242ac110005
STEP: Creating secret with name s-test-opt-upd-f0c9f760-45ba-11ea-8b99-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f0c9f6de-45ba-11ea-8b99-0242ac110005
STEP: Updating secret s-test-opt-upd-f0c9f760-45ba-11ea-8b99-0242ac110005
STEP: Creating secret with name s-test-opt-create-f0c9f784-45ba-11ea-8b99-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:53:16.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7b5ll" for this suite.
Feb  2 12:53:42.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:53:42.216: INFO: namespace: e2e-tests-secrets-7b5ll, resource: bindings, ignored listing per whitelist
Feb  2 12:53:42.283: INFO: namespace e2e-tests-secrets-7b5ll deletion completed in 26.259916086s

• [SLOW TEST:43.127 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:53:42.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  2 12:53:50.661: INFO: Pod pod-hostip-0a7320d9-45bb-11ea-8b99-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:53:50.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vz5w8" for this suite.
Feb  2 12:54:14.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:54:14.798: INFO: namespace: e2e-tests-pods-vz5w8, resource: bindings, ignored listing per whitelist
Feb  2 12:54:14.933: INFO: namespace e2e-tests-pods-vz5w8 deletion completed in 24.264776048s

• [SLOW TEST:32.650 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:54:14.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb  2 12:54:15.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:17.519: INFO: stderr: ""
Feb  2 12:54:17.519: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 12:54:17.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:17.949: INFO: stderr: ""
Feb  2 12:54:17.950: INFO: stdout: "update-demo-nautilus-f9znx update-demo-nautilus-pgsts "
Feb  2 12:54:17.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9znx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:18.147: INFO: stderr: ""
Feb  2 12:54:18.147: INFO: stdout: ""
Feb  2 12:54:18.147: INFO: update-demo-nautilus-f9znx is created but not running
Feb  2 12:54:23.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:23.357: INFO: stderr: ""
Feb  2 12:54:23.357: INFO: stdout: "update-demo-nautilus-f9znx update-demo-nautilus-pgsts "
Feb  2 12:54:23.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9znx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:23.490: INFO: stderr: ""
Feb  2 12:54:23.490: INFO: stdout: ""
Feb  2 12:54:23.490: INFO: update-demo-nautilus-f9znx is created but not running
Feb  2 12:54:28.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:28.715: INFO: stderr: ""
Feb  2 12:54:28.715: INFO: stdout: "update-demo-nautilus-f9znx update-demo-nautilus-pgsts "
Feb  2 12:54:28.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9znx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:29.025: INFO: stderr: ""
Feb  2 12:54:29.025: INFO: stdout: ""
Feb  2 12:54:29.025: INFO: update-demo-nautilus-f9znx is created but not running
Feb  2 12:54:34.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:34.165: INFO: stderr: ""
Feb  2 12:54:34.165: INFO: stdout: "update-demo-nautilus-f9znx update-demo-nautilus-pgsts "
Feb  2 12:54:34.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9znx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:34.278: INFO: stderr: ""
Feb  2 12:54:34.278: INFO: stdout: "true"
Feb  2 12:54:34.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9znx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:34.398: INFO: stderr: ""
Feb  2 12:54:34.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 12:54:34.398: INFO: validating pod update-demo-nautilus-f9znx
Feb  2 12:54:34.413: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 12:54:34.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  2 12:54:34.413: INFO: update-demo-nautilus-f9znx is verified up and running
Feb  2 12:54:34.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgsts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:34.626: INFO: stderr: ""
Feb  2 12:54:34.626: INFO: stdout: "true"
Feb  2 12:54:34.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgsts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:54:34.740: INFO: stderr: ""
Feb  2 12:54:34.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 12:54:34.740: INFO: validating pod update-demo-nautilus-pgsts
Feb  2 12:54:34.758: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 12:54:34.758: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  2 12:54:34.758: INFO: update-demo-nautilus-pgsts is verified up and running
STEP: rolling-update to new replication controller
Feb  2 12:54:34.761: INFO: scanned /root for discovery docs: 
Feb  2 12:54:34.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:24.950: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  2 12:55:24.950: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 12:55:24.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:25.216: INFO: stderr: ""
Feb  2 12:55:25.216: INFO: stdout: "update-demo-kitten-glhgx update-demo-kitten-xqqmp "
Feb  2 12:55:25.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-glhgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:25.414: INFO: stderr: ""
Feb  2 12:55:25.414: INFO: stdout: "true"
Feb  2 12:55:25.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-glhgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:25.538: INFO: stderr: ""
Feb  2 12:55:25.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  2 12:55:25.538: INFO: validating pod update-demo-kitten-glhgx
Feb  2 12:55:25.556: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  2 12:55:25.556: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  2 12:55:25.556: INFO: update-demo-kitten-glhgx is verified up and running
Feb  2 12:55:25.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xqqmp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:25.664: INFO: stderr: ""
Feb  2 12:55:25.664: INFO: stdout: "true"
Feb  2 12:55:25.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xqqmp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkfwk'
Feb  2 12:55:25.746: INFO: stderr: ""
Feb  2 12:55:25.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  2 12:55:25.746: INFO: validating pod update-demo-kitten-xqqmp
Feb  2 12:55:25.754: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  2 12:55:25.754: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  2 12:55:25.754: INFO: update-demo-kitten-xqqmp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:55:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bkfwk" for this suite.
Feb  2 12:55:51.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:55:51.998: INFO: namespace: e2e-tests-kubectl-bkfwk, resource: bindings, ignored listing per whitelist
Feb  2 12:55:52.055: INFO: namespace e2e-tests-kubectl-bkfwk deletion completed in 26.295046794s

• [SLOW TEST:97.122 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:55:52.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-57ce10a9-45bb-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:55:52.281: INFO: Waiting up to 5m0s for pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-xzvsm" to be "success or failure"
Feb  2 12:55:52.297: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.129438ms
Feb  2 12:55:54.332: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050977673s
Feb  2 12:55:56.372: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090076045s
Feb  2 12:55:59.711: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.429225142s
Feb  2 12:56:01.842: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.56106388s
Feb  2 12:56:03.873: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.592048109s
STEP: Saw pod success
Feb  2 12:56:03.874: INFO: Pod "pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:56:03.894: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:56:04.614: INFO: Waiting for pod pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005 to disappear
Feb  2 12:56:04.696: INFO: Pod pod-configmaps-57cf862c-45bb-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:56:04.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xzvsm" for this suite.
Feb  2 12:56:12.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:56:12.835: INFO: namespace: e2e-tests-configmap-xzvsm, resource: bindings, ignored listing per whitelist
Feb  2 12:56:12.903: INFO: namespace e2e-tests-configmap-xzvsm deletion completed in 8.191206909s

• [SLOW TEST:20.848 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:56:12.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-642f4953-45bb-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  2 12:56:13.083: INFO: Waiting up to 5m0s for pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005" in namespace "e2e-tests-configmap-qfnkt" to be "success or failure"
Feb  2 12:56:13.094: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.058449ms
Feb  2 12:56:15.133: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050466559s
Feb  2 12:56:17.155: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072617742s
Feb  2 12:56:19.329: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246160709s
Feb  2 12:56:21.496: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.413610289s
Feb  2 12:56:23.596: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.513148896s
Feb  2 12:56:25.612: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.529289559s
STEP: Saw pod success
Feb  2 12:56:25.612: INFO: Pod "pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:56:25.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  2 12:56:25.787: INFO: Waiting for pod pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005 to disappear
Feb  2 12:56:25.802: INFO: Pod pod-configmaps-64319792-45bb-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:56:25.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qfnkt" for this suite.
Feb  2 12:56:32.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:56:32.236: INFO: namespace: e2e-tests-configmap-qfnkt, resource: bindings, ignored listing per whitelist
Feb  2 12:56:32.306: INFO: namespace e2e-tests-configmap-qfnkt deletion completed in 6.488355573s

• [SLOW TEST:19.403 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:56:32.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  2 12:56:32.696: INFO: Waiting up to 5m0s for pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005" in namespace "e2e-tests-emptydir-pvjbb" to be "success or failure"
Feb  2 12:56:32.708: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.804007ms
Feb  2 12:56:34.727: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031441386s
Feb  2 12:56:36.747: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051251226s
Feb  2 12:56:38.948: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25153335s
Feb  2 12:56:42.318: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.62188308s
Feb  2 12:56:44.330: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.633493335s
STEP: Saw pod success
Feb  2 12:56:44.330: INFO: Pod "pod-6fd59fa1-45bb-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 12:56:44.339: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6fd59fa1-45bb-11ea-8b99-0242ac110005 container test-container: 
STEP: delete the pod
Feb  2 12:56:45.215: INFO: Waiting for pod pod-6fd59fa1-45bb-11ea-8b99-0242ac110005 to disappear
Feb  2 12:56:45.456: INFO: Pod pod-6fd59fa1-45bb-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:56:45.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pvjbb" for this suite.
Feb  2 12:56:51.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:56:51.727: INFO: namespace: e2e-tests-emptydir-pvjbb, resource: bindings, ignored listing per whitelist
Feb  2 12:56:51.752: INFO: namespace e2e-tests-emptydir-pvjbb deletion completed in 6.282651932s

• [SLOW TEST:19.446 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:56:51.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mdhrn
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  2 12:56:52.347: INFO: Found 0 stateful pods, waiting for 3
Feb  2 12:57:02.364: INFO: Found 1 stateful pods, waiting for 3
Feb  2 12:57:13.014: INFO: Found 2 stateful pods, waiting for 3
Feb  2 12:57:22.471: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:57:22.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:57:22.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 12:57:32.383: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:57:32.383: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:57:32.383: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  2 12:57:32.493: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  2 12:57:43.555: INFO: Updating stateful set ss2
Feb  2 12:57:43.877: INFO: Waiting for Pod e2e-tests-statefulset-mdhrn/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  2 12:57:54.284: INFO: Found 2 stateful pods, waiting for 3
Feb  2 12:58:05.084: INFO: Found 2 stateful pods, waiting for 3
Feb  2 12:58:14.305: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:58:14.306: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:58:14.306: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 12:58:24.294: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:58:24.294: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 12:58:24.294: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  2 12:58:24.416: INFO: Updating stateful set ss2
Feb  2 12:58:24.426: INFO: Waiting for Pod e2e-tests-statefulset-mdhrn/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 12:58:34.476: INFO: Updating stateful set ss2
Feb  2 12:58:34.518: INFO: Waiting for StatefulSet e2e-tests-statefulset-mdhrn/ss2 to complete update
Feb  2 12:58:34.518: INFO: Waiting for Pod e2e-tests-statefulset-mdhrn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 12:58:44.911: INFO: Waiting for StatefulSet e2e-tests-statefulset-mdhrn/ss2 to complete update
Feb  2 12:58:44.911: INFO: Waiting for Pod e2e-tests-statefulset-mdhrn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 12:58:55.083: INFO: Waiting for StatefulSet e2e-tests-statefulset-mdhrn/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  2 12:59:04.864: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mdhrn
Feb  2 12:59:04.871: INFO: Scaling statefulset ss2 to 0
Feb  2 12:59:35.163: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 12:59:35.175: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:59:35.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mdhrn" for this suite.
Feb  2 12:59:45.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:59:45.680: INFO: namespace: e2e-tests-statefulset-mdhrn, resource: bindings, ignored listing per whitelist
Feb  2 12:59:45.738: INFO: namespace e2e-tests-statefulset-mdhrn deletion completed in 10.478088144s

• [SLOW TEST:173.986 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:59:45.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  2 12:59:46.100: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix396433273/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 12:59:46.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2xxgs" for this suite.
Feb  2 12:59:52.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 12:59:52.390: INFO: namespace: e2e-tests-kubectl-2xxgs, resource: bindings, ignored listing per whitelist
Feb  2 12:59:52.725: INFO: namespace e2e-tests-kubectl-2xxgs deletion completed in 6.452759599s

• [SLOW TEST:6.986 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 12:59:52.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  2 12:59:53.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005" in namespace "e2e-tests-projected-dwtzj" to be "success or failure"
Feb  2 12:59:53.180: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 162.968667ms
Feb  2 12:59:55.902: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884499884s
Feb  2 12:59:57.923: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.9053195s
Feb  2 12:59:59.934: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.916157842s
Feb  2 13:00:02.282: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.264281064s
Feb  2 13:00:04.313: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.295410771s
Feb  2 13:00:06.327: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.309752809s
STEP: Saw pod success
Feb  2 13:00:06.327: INFO: Pod "downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 13:00:06.334: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005 container client-container: 
STEP: delete the pod
Feb  2 13:00:06.465: INFO: Waiting for pod downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005 to disappear
Feb  2 13:00:06.478: INFO: Pod downwardapi-volume-e74886da-45bb-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:00:06.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dwtzj" for this suite.
Feb  2 13:00:12.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:00:12.717: INFO: namespace: e2e-tests-projected-dwtzj, resource: bindings, ignored listing per whitelist
Feb  2 13:00:12.770: INFO: namespace e2e-tests-projected-dwtzj deletion completed in 6.279371234s

• [SLOW TEST:20.044 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:00:12.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-7b8p9
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-7b8p9
STEP: Deleting pre-stop pod
Feb  2 13:00:42.243: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:00:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-7b8p9" for this suite.
Feb  2 13:01:22.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:01:22.825: INFO: namespace: e2e-tests-prestop-7b8p9, resource: bindings, ignored listing per whitelist
Feb  2 13:01:22.884: INFO: namespace e2e-tests-prestop-7b8p9 deletion completed in 40.44527811s

• [SLOW TEST:70.114 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:01:22.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-z2b8g
Feb  2 13:01:33.458: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-z2b8g
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 13:01:33.465: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:05:35.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-z2b8g" for this suite.
Feb  2 13:05:41.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:05:41.309: INFO: namespace: e2e-tests-container-probe-z2b8g, resource: bindings, ignored listing per whitelist
Feb  2 13:05:41.464: INFO: namespace e2e-tests-container-probe-z2b8g deletion completed in 6.255689212s

• [SLOW TEST:258.579 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:05:41.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  2 13:05:41.818: INFO: Number of nodes with available pods: 0
Feb  2 13:05:41.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:43.550: INFO: Number of nodes with available pods: 0
Feb  2 13:05:43.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:43.875: INFO: Number of nodes with available pods: 0
Feb  2 13:05:43.875: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:44.839: INFO: Number of nodes with available pods: 0
Feb  2 13:05:44.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:45.868: INFO: Number of nodes with available pods: 0
Feb  2 13:05:45.868: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:47.809: INFO: Number of nodes with available pods: 0
Feb  2 13:05:47.810: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:47.858: INFO: Number of nodes with available pods: 0
Feb  2 13:05:47.858: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:48.970: INFO: Number of nodes with available pods: 0
Feb  2 13:05:48.970: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:49.847: INFO: Number of nodes with available pods: 1
Feb  2 13:05:49.847: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  2 13:05:50.011: INFO: Number of nodes with available pods: 0
Feb  2 13:05:50.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:51.036: INFO: Number of nodes with available pods: 0
Feb  2 13:05:51.036: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:52.061: INFO: Number of nodes with available pods: 0
Feb  2 13:05:52.061: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:53.226: INFO: Number of nodes with available pods: 0
Feb  2 13:05:53.226: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:54.023: INFO: Number of nodes with available pods: 0
Feb  2 13:05:54.023: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:55.059: INFO: Number of nodes with available pods: 0
Feb  2 13:05:55.059: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:56.028: INFO: Number of nodes with available pods: 0
Feb  2 13:05:56.028: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:57.565: INFO: Number of nodes with available pods: 0
Feb  2 13:05:57.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:58.033: INFO: Number of nodes with available pods: 0
Feb  2 13:05:58.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:05:59.060: INFO: Number of nodes with available pods: 0
Feb  2 13:05:59.060: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:06:00.042: INFO: Number of nodes with available pods: 0
Feb  2 13:06:00.042: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  2 13:06:01.064: INFO: Number of nodes with available pods: 1
Feb  2 13:06:01.064: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9tqsj, will wait for the garbage collector to delete the pods
Feb  2 13:06:01.235: INFO: Deleting DaemonSet.extensions daemon-set took: 79.996965ms
Feb  2 13:06:01.335: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.264573ms
Feb  2 13:06:12.654: INFO: Number of nodes with available pods: 0
Feb  2 13:06:12.654: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 13:06:12.665: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9tqsj/daemonsets","resourceVersion":"20311785"},"items":null}

Feb  2 13:06:12.680: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9tqsj/pods","resourceVersion":"20311786"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:06:12.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9tqsj" for this suite.
Feb  2 13:06:20.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:06:20.985: INFO: namespace: e2e-tests-daemonsets-9tqsj, resource: bindings, ignored listing per whitelist
Feb  2 13:06:21.052: INFO: namespace e2e-tests-daemonsets-9tqsj deletion completed in 8.276065761s

• [SLOW TEST:39.587 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:06:21.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  2 13:06:21.270: INFO: Waiting up to 5m0s for pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005" in namespace "e2e-tests-downward-api-l24l9" to be "success or failure"
Feb  2 13:06:21.286: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.40151ms
Feb  2 13:06:23.301: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031283053s
Feb  2 13:06:25.319: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048749219s
Feb  2 13:06:27.683: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413390168s
Feb  2 13:06:29.703: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433299985s
Feb  2 13:06:31.872: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.601750438s
Feb  2 13:06:34.274: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.004360975s
STEP: Saw pod success
Feb  2 13:06:34.274: INFO: Pod "downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 13:06:34.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  2 13:06:35.019: INFO: Waiting for pod downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005 to disappear
Feb  2 13:06:35.045: INFO: Pod downward-api-ceb7990c-45bc-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:06:35.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l24l9" for this suite.
Feb  2 13:06:41.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:06:41.273: INFO: namespace: e2e-tests-downward-api-l24l9, resource: bindings, ignored listing per whitelist
Feb  2 13:06:41.351: INFO: namespace e2e-tests-downward-api-l24l9 deletion completed in 6.287603402s

• [SLOW TEST:20.299 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:06:41.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-dac546e7-45bc-11ea-8b99-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  2 13:06:41.553: INFO: Waiting up to 5m0s for pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005" in namespace "e2e-tests-secrets-zgkpv" to be "success or failure"
Feb  2 13:06:41.576: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.903396ms
Feb  2 13:06:43.594: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040859844s
Feb  2 13:06:45.608: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054325722s
Feb  2 13:06:47.621: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067603136s
Feb  2 13:06:50.802: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.249064662s
Feb  2 13:06:52.817: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.263610143s
Feb  2 13:06:54.905: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.351630754s
STEP: Saw pod success
Feb  2 13:06:54.905: INFO: Pod "pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005" satisfied condition "success or failure"
Feb  2 13:06:54.940: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  2 13:06:55.139: INFO: Waiting for pod pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005 to disappear
Feb  2 13:06:55.147: INFO: Pod pod-secrets-dace6ea4-45bc-11ea-8b99-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:06:55.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zgkpv" for this suite.
Feb  2 13:07:01.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:07:01.499: INFO: namespace: e2e-tests-secrets-zgkpv, resource: bindings, ignored listing per whitelist
Feb  2 13:07:01.779: INFO: namespace e2e-tests-secrets-zgkpv deletion completed in 6.626115472s

• [SLOW TEST:20.427 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  2 13:07:01.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  2 13:07:02.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:04.193: INFO: stderr: ""
Feb  2 13:07:04.193: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 13:07:04.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:04.354: INFO: stderr: ""
Feb  2 13:07:04.354: INFO: stdout: "update-demo-nautilus-47bst update-demo-nautilus-m9zvw "
Feb  2 13:07:04.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47bst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:04.611: INFO: stderr: ""
Feb  2 13:07:04.611: INFO: stdout: ""
Feb  2 13:07:04.611: INFO: update-demo-nautilus-47bst is created but not running
Feb  2 13:07:09.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:10.473: INFO: stderr: ""
Feb  2 13:07:10.473: INFO: stdout: "update-demo-nautilus-47bst update-demo-nautilus-m9zvw "
Feb  2 13:07:10.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47bst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:10.741: INFO: stderr: ""
Feb  2 13:07:10.741: INFO: stdout: ""
Feb  2 13:07:10.741: INFO: update-demo-nautilus-47bst is created but not running
Feb  2 13:07:15.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:15.952: INFO: stderr: ""
Feb  2 13:07:15.952: INFO: stdout: "update-demo-nautilus-47bst update-demo-nautilus-m9zvw "
Feb  2 13:07:15.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47bst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:16.126: INFO: stderr: ""
Feb  2 13:07:16.127: INFO: stdout: "true"
Feb  2 13:07:16.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47bst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:16.274: INFO: stderr: ""
Feb  2 13:07:16.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:16.275: INFO: validating pod update-demo-nautilus-47bst
Feb  2 13:07:16.316: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:16.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:16.316: INFO: update-demo-nautilus-47bst is verified up and running
Feb  2 13:07:16.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:16.613: INFO: stderr: ""
Feb  2 13:07:16.613: INFO: stdout: "true"
Feb  2 13:07:16.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:16.769: INFO: stderr: ""
Feb  2 13:07:16.769: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:16.769: INFO: validating pod update-demo-nautilus-m9zvw
Feb  2 13:07:16.777: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:16.777: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:16.777: INFO: update-demo-nautilus-m9zvw is verified up and running
STEP: scaling down the replication controller
Feb  2 13:07:16.779: INFO: scanned /root for discovery docs: 
Feb  2 13:07:16.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:18.172: INFO: stderr: ""
Feb  2 13:07:18.172: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 13:07:18.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:18.329: INFO: stderr: ""
Feb  2 13:07:18.329: INFO: stdout: "update-demo-nautilus-47bst update-demo-nautilus-m9zvw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  2 13:07:23.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:23.473: INFO: stderr: ""
Feb  2 13:07:23.473: INFO: stdout: "update-demo-nautilus-m9zvw "
Feb  2 13:07:23.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:23.572: INFO: stderr: ""
Feb  2 13:07:23.572: INFO: stdout: "true"
Feb  2 13:07:23.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:23.727: INFO: stderr: ""
Feb  2 13:07:23.727: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:23.727: INFO: validating pod update-demo-nautilus-m9zvw
Feb  2 13:07:23.740: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:23.740: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:23.740: INFO: update-demo-nautilus-m9zvw is verified up and running
STEP: scaling up the replication controller
Feb  2 13:07:23.742: INFO: scanned /root for discovery docs: 
Feb  2 13:07:23.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:25.458: INFO: stderr: ""
Feb  2 13:07:25.458: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 13:07:25.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:25.788: INFO: stderr: ""
Feb  2 13:07:25.788: INFO: stdout: "update-demo-nautilus-m9zvw update-demo-nautilus-xc8pl "
Feb  2 13:07:25.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:25.961: INFO: stderr: ""
Feb  2 13:07:25.962: INFO: stdout: "true"
Feb  2 13:07:25.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:26.100: INFO: stderr: ""
Feb  2 13:07:26.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:26.100: INFO: validating pod update-demo-nautilus-m9zvw
Feb  2 13:07:26.108: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:26.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:26.108: INFO: update-demo-nautilus-m9zvw is verified up and running
Feb  2 13:07:26.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc8pl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:26.202: INFO: stderr: ""
Feb  2 13:07:26.202: INFO: stdout: ""
Feb  2 13:07:26.202: INFO: update-demo-nautilus-xc8pl is created but not running
Feb  2 13:07:31.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:31.549: INFO: stderr: ""
Feb  2 13:07:31.549: INFO: stdout: "update-demo-nautilus-m9zvw update-demo-nautilus-xc8pl "
Feb  2 13:07:31.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:31.777: INFO: stderr: ""
Feb  2 13:07:31.777: INFO: stdout: "true"
Feb  2 13:07:31.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:31.914: INFO: stderr: ""
Feb  2 13:07:31.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:31.914: INFO: validating pod update-demo-nautilus-m9zvw
Feb  2 13:07:31.924: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:31.924: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:31.924: INFO: update-demo-nautilus-m9zvw is verified up and running
Feb  2 13:07:31.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc8pl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:32.162: INFO: stderr: ""
Feb  2 13:07:32.162: INFO: stdout: ""
Feb  2 13:07:32.162: INFO: update-demo-nautilus-xc8pl is created but not running
Feb  2 13:07:37.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:37.296: INFO: stderr: ""
Feb  2 13:07:37.296: INFO: stdout: "update-demo-nautilus-m9zvw update-demo-nautilus-xc8pl "
Feb  2 13:07:37.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:37.424: INFO: stderr: ""
Feb  2 13:07:37.424: INFO: stdout: "true"
Feb  2 13:07:37.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:37.513: INFO: stderr: ""
Feb  2 13:07:37.513: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:37.513: INFO: validating pod update-demo-nautilus-m9zvw
Feb  2 13:07:37.521: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:37.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:37.521: INFO: update-demo-nautilus-m9zvw is verified up and running
Feb  2 13:07:37.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc8pl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:37.632: INFO: stderr: ""
Feb  2 13:07:37.632: INFO: stdout: "true"
Feb  2 13:07:37.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc8pl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:37.786: INFO: stderr: ""
Feb  2 13:07:37.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:07:37.786: INFO: validating pod update-demo-nautilus-xc8pl
Feb  2 13:07:37.806: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:07:37.806: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:07:37.806: INFO: update-demo-nautilus-xc8pl is verified up and running
STEP: using delete to clean up resources
Feb  2 13:07:37.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:38.010: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:07:38.011: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  2 13:07:38.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6wfc8'
Feb  2 13:07:38.332: INFO: stderr: "No resources found.\n"
Feb  2 13:07:38.332: INFO: stdout: ""
Feb  2 13:07:38.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6wfc8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 13:07:38.538: INFO: stderr: ""
Feb  2 13:07:38.538: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  2 13:07:38.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6wfc8" for this suite.
Feb  2 13:08:02.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:08:02.803: INFO: namespace: e2e-tests-kubectl-6wfc8, resource: bindings, ignored listing per whitelist
Feb  2 13:08:02.903: INFO: namespace e2e-tests-kubectl-6wfc8 deletion completed in 24.328346943s

• [SLOW TEST:61.124 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
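The readiness polling in the spec above is kubectl repeatedly evaluating a Go text/template against the pod JSON: the template prints `true` only when the `update-demo` container reports a `running` state, and prints nothing while the container is still starting (the empty-stdout "created but not running" iterations). A minimal standalone sketch of that template logic follows; the mock pod JSON and the `exists` helper here are assumptions for illustration (kubectl registers its own `exists` function internally), not kubectl's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"text/template"
)

// Mock pod JSON (an assumption for illustration); the shape mirrors the
// PodStatus fields the e2e template walks:
// status.containerStatuses[].state.running.
const podJSON = `{
  "status": {
    "containerStatuses": [
      {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-02T13:07:20Z"}}}
    ]
  }
}`

// exists approximates kubectl's "exists" template helper: it walks the
// given key path through nested JSON objects and reports whether it resolves.
func exists(item interface{}, keys ...string) bool {
	cur := item
	for _, k := range keys {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return false
		}
		v, found := m[k]
		if !found {
			return false
		}
		cur = v
	}
	return true
}

func main() {
	var pod map[string]interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		panic(err)
	}
	// The template string the test passes to kubectl via --template.
	const readyTpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	tpl := template.Must(template.New("ready").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(readyTpl))
	if err := tpl.Execute(os.Stdout, pod); err != nil {
		panic(err)
	}
	fmt.Println() // prints "true" for the mock pod above
}
```

If the mock pod's `state` object held only `waiting` instead of `running`, the inner `exists` check would fail and the template would emit an empty string, matching the `stdout: ""` iterations in the log before the second replica became ready.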
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb  2 13:08:02.904: INFO: Running AfterSuite actions on all nodes
Feb  2 13:08:02.904: INFO: Running AfterSuite actions on node 1
Feb  2 13:08:02.904: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8447.824 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS